CN113204426A - Task processing method of resource pool and related equipment - Google Patents
- Publication number
- CN113204426A (application CN202110468537.0A)
- Authority
- CN
- China
- Prior art keywords
- resource pool
- utilization rate
- target system
- resource
- queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
Embodiments of the present disclosure provide a task processing method and apparatus for a resource pool, a computer-readable storage medium, and an electronic device, belonging to the field of computer and communication technology. The method comprises the following steps: obtaining the category, resource count, and queuing queue of a resource pool of a target system; obtaining the CPU usage rate, memory usage rate, and disk usage rate of the target system's server; when tasks are queued in the resource pool's queue and the server's CPU, memory, and disk usage rates are all below their warning lines, increasing the resource count of the resource pool; and processing the queued tasks in the pool's queuing queue with the newly added resources. The disclosed method can intelligently and automatically adjust the resource count of a resource pool according to the actual running state of the target system and the usage of the resource pool.
Description
Technical Field
The present disclosure relates to the field of computer and communication technologies, and in particular, to a method and an apparatus for processing a task in a resource pool, a computer-readable storage medium, and an electronic device.
Background
Today, system development is inseparable from the use of resource pools, such as database connection pools and thread pools. How to set an appropriate resource count for a pool, so that the system achieves maximum efficiency, is a key concern in system development.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
Embodiments of the present disclosure provide a task processing method and apparatus for a resource pool, a computer-readable storage medium, and an electronic device, which can intelligently and automatically adjust the resource count of a resource pool according to the actual running state of a target system and the usage of the pool.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, a task processing method for a resource pool is provided, comprising:
obtaining the category, resource count, and queuing queue of a resource pool of a target system;
obtaining the CPU usage rate, memory usage rate, and disk usage rate of the target system's server;
when tasks are queued in the resource pool's queue and the server's CPU, memory, and disk usage rates are all below their warning lines, increasing the resource count of the resource pool;
and processing the queued tasks in the pool's queuing queue with the newly added resources of the resource pool.
In one embodiment, obtaining the category, resource count, and queuing queue of the target system's resource pool comprises:
attaching to the target system in plug-in form;
obtaining the compiled files loaded and used by the target system;
and obtaining the category, resource count, and queuing queue of the target system's resource pool from the compiled files.
In one embodiment, tasks being queued in the queue of the resource pool comprises:
the queuing rate of tasks in the queue of the resource pool being higher than a first rate.
In one embodiment, the method further comprises:
and when the idle resources of the resource pool exceed an idle line, reducing the resource count of the resource pool.
In one embodiment, the method further comprises:
obtaining the number of tasks processed by the target system per unit time;
and after the resource count of the resource pool is increased, if the number of tasks processed by the target system per unit time decreases, releasing the newly created resources of the resource pool.
In one embodiment, obtaining the number of tasks processed by the target system per unit time comprises:
collecting data from the port used by the target system to determine the number of tasks it processes per unit time.
In one embodiment, the method further comprises:
and when any one of the CPU usage rate, memory usage rate, or disk usage rate of the target system's server is at or above the warning line, stopping the creation of resources for the resource pool or releasing the newly created resources of the resource pool.
According to an aspect of the present disclosure, there is provided a task processing apparatus of a resource pool, including:
a first obtaining module configured to obtain the category, resource count, and queuing queue of a resource pool of a target system;
a second obtaining module configured to obtain the CPU usage rate, memory usage rate, and disk usage rate of the target system's server;
an increasing module configured to increase the resource count of the resource pool when tasks are queued in the pool's queue and the server's CPU, memory, and disk usage rates are all below the warning line;
and a processing module configured to process the queued tasks in the pool's queuing queue with the newly added resources of the resource pool.
According to an aspect of the present disclosure, there is provided an electronic device including:
one or more processors;
a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above embodiments.
According to an aspect of the present disclosure, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of the above embodiments.
In the technical solutions provided by some embodiments of the present disclosure, the resource count of a resource pool can be intelligently and automatically adjusted according to the actual running state of the target system and the usage of the resource pool.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The following figures depict certain illustrative embodiments of the present disclosure, in which like reference numerals refer to like elements. These embodiments are to be considered exemplary and not limiting in any way.
Fig. 1 shows a schematic diagram of an exemplary system architecture of a task processing device of a resource pool or a task processing method of a resource pool to which an embodiment of the present disclosure may be applied;
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device implementing embodiments of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method of task processing of a resource pool according to an embodiment of the present disclosure;
FIG. 4 schematically shows a block diagram of a task processing device of a resource pool according to an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of a task processing device of a resource pool according to another embodiment of the present disclosure;
Fig. 6 schematically shows a block diagram of a task processing device of a resource pool according to another embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 of a task processing device of a resource pool or a task processing method of a resource pool to which the embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
A user (client) may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having display screens including, but not limited to, smart phones, tablets, portable and desktop computers, digital cinema projectors, and the like.
The server 105 may be a server that provides various services. For example, the user may access the server 105 using the terminal device 103 (which may also be the terminal device 101 or 102). The server 105 may obtain the category, the resource quantity, and the queue of the target system resource pool; acquiring the central processing unit utilization rate, the memory utilization rate and the disk utilization rate of a server of the target system; when tasks are queued in a queue of the resource pool and the utilization rate of a central processing unit, the utilization rate of a memory and the utilization rate of a disk of the server are all lower than warning lines, the quantity of the resources in the resource pool is increased; and processing the queuing tasks in the queuing queue of the resource pool by using the newly added resources of the resource pool. The server 105 may dynamically adjust the resource pool of the server of the target system, so that the adjusted resource quantity condition of the resource pool is displayed on the terminal device 103, and the user may view the resource quantity condition of the corresponding resource pool based on the content displayed on the terminal device 103.
As another example, the terminal device 103 (or terminal device 101 or 102) may be a smart TV, a VR (Virtual Reality)/AR (Augmented Reality) head-mounted display, or a mobile terminal such as a smartphone or tablet on which navigation, ride-hailing, instant-messaging, or video applications (APPs) are installed. The server 105 can adjust the resource count of the resource pool according to the resource count and queuing queue of the target system's pool and the CPU, memory, and disk usage rates of the target system's server, and return the adjusted resource count of the pool to the terminal, so that it is displayed through the smart TV, the VR/AR display, or the installed APPs.
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU)201 that can perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data necessary for system operation are also stored. The CPU 201, ROM 202, and RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 208 including a hard disk and the like; and a communication section 209 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 210 as necessary, so that a computer program read out therefrom is installed into the storage section 208 as necessary.
In particular, the processes described below with reference to the flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication section 209 and/or installed from the removable medium 211. The computer program, when executed by a Central Processing Unit (CPU)201, performs various functions defined in the methods and/or apparatus of the present application.
It should be noted that the computer readable storage medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM) or flash Memory), an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF (Radio Frequency), etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or units and/or sub-units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described modules and/or units and/or sub-units may also be disposed in a processor. Wherein the names of such modules and/or units and/or sub-units in some cases do not constitute a limitation on the modules and/or units and/or sub-units themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiment; or may exist separately without being assembled into the electronic device. The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the embodiments below. For example, the electronic device may implement the steps shown in fig. 3.
In the related art, for example, a machine learning method, a deep learning method, or the like may be used to optimize task processing of a resource pool, and the application range of different methods is different.
Fig. 3 schematically shows a flowchart of a task processing method of a resource pool according to an embodiment of the present disclosure. The method steps of the embodiments of the present disclosure may be executed by a terminal device or a server, or interactively by both; for example, they may be executed by the server 105 in Fig. 1, but the present disclosure is not limited thereto.
In step S310, the category, the resource amount, and the queue of the target system resource pool are obtained.
In this step, the terminal device or the server obtains the category, resource count, and queuing queue of the target system's resource pool, i.e., of the resource pool that the target system operates or uses. The category is, for example, the kind of resource pool. The resource count is, for example, the number of resources, such as threads, contained in the pool. The queuing queue reflects the working state of the resources in the pool. For example, if a pool has 10 resources and currently 15 tasks in total, the 10 resources each process one task while the other 5 tasks wait in the queue; in this case, the task queuing rate of the pool is 50%.
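The queuing-rate arithmetic in this example can be sketched as follows; the helper name is mine, not from the patent:

```java
// A minimal sketch of the queuing-rate calculation, assuming the rate is
// defined as (queued tasks) / (pool resource count), as in the example above.
public class QueueRateSketch {
    /** Queuing rate of a pool: queued tasks divided by the pool's resource count. */
    static double queuingRate(int resourceCount, int queuedTasks) {
        // Guard against an empty pool to avoid division by zero.
        return resourceCount == 0 ? 0.0 : (double) queuedTasks / resourceCount;
    }

    public static void main(String[] args) {
        // The example in the text: 10 resources, 15 tasks total, 5 queued.
        System.out.println(queuingRate(10, 15 - 10)); // 0.5, i.e. 50%
    }
}
```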
In the embodiments of the present disclosure, a terminal device or a server may be implemented in various forms. For example, the terminal described in the present disclosure may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a task processing device of a resource pool, a wearable device, a smart band, a pedometer, a robot, an unmanned vehicle, and the like, and a fixed terminal such as a digital TV (television), a desktop computer, and the like.
In step S320, a central processing unit utilization rate, a memory utilization rate, and a disk utilization rate of the server of the target system are obtained.
In this step, the terminal device or the server obtains the CPU usage rate, memory usage rate, and disk usage rate of the target system's server. The target system's server and the server executing the method of the present application may be the same server or different servers; the present application is not limited in this respect.
In step S330, when there is a task queue in the queue of the resource pool and the cpu utilization, the memory utilization, and the disk utilization of the server are all lower than the warning lines, the resource amount of the resource pool is increased.
In this step, the terminal device or the server increases the resource count of the resource pool when tasks are queued in the pool's queue and the server's CPU usage rate, memory usage rate, and disk usage rate are all below a warning line, for example 80%. In one embodiment, tasks being queued in the queue of the resource pool means that the queuing rate of tasks in the pool's queue is higher than a first rate, for example 30%.
In step S340, the newly added resource of the resource pool is used to process the queued task in the queuing queue of the resource pool.
In this step, the terminal device or the server processes the queued tasks in the pool's queuing queue with the newly added resources of the resource pool.
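The growth decision in step S330 can be sketched as follows; the 80% warning line and 30% first rate are the example values given above, and the method name is hypothetical:

```java
// Sketch of the S330 grow decision: the pool grows only when tasks are
// queuing (rate above the first rate) AND every server metric is below
// the warning line.
public class PoolScalerSketch {
    static final double WARNING_LINE = 0.80; // example alert threshold from the text
    static final double FIRST_RATE = 0.30;   // example queuing-rate trigger from the text

    static boolean shouldGrow(double queuingRate, double cpu, double mem, double disk) {
        return queuingRate > FIRST_RATE
                && cpu < WARNING_LINE
                && mem < WARNING_LINE
                && disk < WARNING_LINE;
    }

    public static void main(String[] args) {
        System.out.println(shouldGrow(0.50, 0.40, 0.50, 0.30)); // true: grow the pool
        System.out.println(shouldGrow(0.50, 0.90, 0.50, 0.30)); // false: CPU at alert level
        System.out.println(shouldGrow(0.10, 0.40, 0.50, 0.30)); // false: no real queuing
    }
}
```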
In one embodiment, the task processing method of the resource pool further comprises: when the idle resources of the resource pool exceed an idle line, reducing the resource count of the pool.
In one embodiment, the method further comprises: obtaining the number of tasks processed by the target system per unit time (TPS); and after the resource count of the pool is increased, if the number of tasks processed per unit time decreases, releasing the newly created resources of the pool.
In one embodiment, obtaining the number of tasks processed per unit time comprises: collecting data from the port used by the target system to determine the number of tasks it processes per unit time.
In one embodiment, the method further comprises: when any one of the CPU usage rate, memory usage rate, or disk usage rate of the target system's server is at or above the warning line, stopping the creation of resources for the pool or releasing the newly created resources of the pool.
The technical scheme of the present application can intelligently and automatically adjust the resource count of a resource pool according to the actual running state of the target system and the usage of the pool, so that the pool delivers its maximum effect in the system's actual running context, resources are used more reasonably, and the performance and safety of the system are ensured.
In one embodiment, a task processing device of a resource pool of the present application may include: 1. a resource pool tracing device; 2. a system operating-condition collector device; 3. a resource pool queuing-queue monitor device; 4. a resource pool lifecycle tracker device.
1. The resource pool tracing device automatically and intelligently traces the resource pool information used in the target system and obtains the attribute values of the pool for use by the resource pool lifecycle tracker device.
The tracing device is attached to the target system in plug-in form and does not intrude into the target system. For example:
java -javaagent:resourcePoll.jar
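For illustration, a minimal Java agent entry point of the kind this command loads might look like the sketch below; the class names are hypothetical, and the transformer is a pass-through stub rather than the patent's actual bytecode enhancement:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Hypothetical skeleton of a -javaagent plug-in: premain runs before the
// target system's main method and registers a class-file transformer.
public class ResourcePoolAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new PoolTransformer());
    }

    static class PoolTransformer implements ClassFileTransformer {
        @Override
        public byte[] transform(ClassLoader loader, String className,
                                Class<?> classBeingRedefined,
                                ProtectionDomain protectionDomain,
                                byte[] classfileBuffer) {
            // A real agent would scan classfileBuffer for resource-creating
            // instructions (e.g. `new Thread`) and insert monitoring calls;
            // this stub leaves every class unchanged.
            return null; // null = do not transform this class
        }
    }
}
```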
The detailed process of the tracing device is as follows:
1) Start.
2) The target system starts.
3) The device starts working.
4) Acquire the compiled files that the target system loads at startup (for example, via Java's ClassLoader); the resource pool information used by the target system can be obtained from these loaded files;
5) In the compiled files of the target system (e.g. Java .class files), find the instructions that create resources (e.g. Java's new Thread instruction, which creates a thread resource);
(corresponding source code: Thread t = new Thread(group, r, namePrefix + threadNumber.getAndIncrement(), 0);)
6) Perform bytecode enhancement before and after the resource-creation instruction (inserting calls to the system operating condition data collector, compiled from source code, to obtain system running data);
7) From the method in which the resource-creation instruction is located (inspected with the javap technique), obtain the name of the method that creates the resource, for example: public Thread newThread(Runnable r);
8) If the name of the class containing the resource-creation instruction ends with "Pool", the resource pool class has been found;
9) If the class name does not end with "Pool":
10) Trace upward to the classes that reference the method containing the instruction; if one of them ends with "Pool", the resource pool class has been found, for example ThreadPoolExecutor;
11) If no class ending with "Pool" is found, the resource pool does not conform to the automatic recognition rule;
12) In that case, the device provides a configuration variable, poolClass, with which the user can specify the class name of the resource pool;
13) Find the resource pool size attributes: maximumPoolSize (the maximum number of resources) and the queuing queue workQueue;
14) In the compiled file of the resource pool class (e.g. the constant pool of a Java .class file), automatically look up the attribute that follows the rule *PoolSize* (e.g. maximumPoolSize) and the queue attribute that follows the rule *Queue* (e.g. workQueue);
15) If they cannot be found, the device provides the configuration variables maximumPoolSize (maximum number of resources) and workQueue for the user to specify.
16) At this point, the following data have been obtained:
resource pool classes
Maximum number of resources attribute
Queuing queue attributes
For example:
| Resource pool class | Maximum number of resources attribute | Queuing queue attribute |
|---|---|---|
| ThreadPoolExecutor.class | maximumPoolSize | workQueue |
| HikariPool.class | config.maximumPoolSize | connectionBag |
17) Store the data.
A local cache storage technology (e.g. Guava Cache or Ehcache) may be employed.
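The patent names Guava Cache or Ehcache for this step; a minimal stand-in using a ConcurrentHashMap shows the shape of the stored record (the class and field names here are hypothetical, and a real cache would add eviction policies):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PoolInfoStore {
    // One row of the example table: pool class plus its two attribute names.
    public record PoolInfo(String poolClass, String maxSizeAttr, String queueAttr) {}

    // Keyed by pool class name; Guava Cache or Ehcache would be drop-in
    // replacements with size limits and expiry.
    private static final Map<String, PoolInfo> CACHE = new ConcurrentHashMap<>();

    public static void store(PoolInfo info) { CACHE.put(info.poolClass(), info); }
    public static PoolInfo lookup(String poolClass) { return CACHE.get(poolClass); }
}
```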
2. The system operating condition collector device collects running data of the target system for use when the size of the resource pool is set, so that a safe resource pool size can be determined from the system's operating condition.
If the resource pool is too large, it can exhaust server resources or cause frequent context switching, reducing the system's TPS (the number of tasks processed per unit time); if it is too small, the system's TPS cannot be improved and hardware resources are wasted.
This device likewise adopts the plug-in technique and is started from the plug-in resource package.
The detailed process is as follows:
1) Start.
2) The device starts with the target system.
3) Start collecting the overall TPS information of the running target system:
obtain the entry port of the target system;
for a Tomcat entry, the default is 8080 (user-specifiable);
for an RPC entry such as Dubbo, the default is 20880 (user-specifiable);
acquire data on the ports used by the target system (e.g. with tcpdump: number of inbound requests, number of outbound responses);
from this, the total number of transactions completed per second (roughly equal to the number of requests handled per second) can be derived;
the responses counted are those whose direction is out of the target system and whose message type is push.
4) Collect the target system's CPU usage rate.
5) Collect the target system's memory usage rate (used/total).
6) Collect the target system's disk usage rate.
7) Data obtained:
| TPS | CPU utilization (%) | Memory usage (%) | Disk usage (%) |
|---|---|---|---|
| 50 | 5 | 5 | 5 |
| 150 | 15 | 15 | 10 |
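Step 5)'s memory usage rate (used/total) can be sketched with the JVM's own Runtime figures. Note this measures the JVM heap only, not the server's physical memory, so it is an illustrative simplification of what the collector gathers:

```java
public class MemoryUsageCollector {
    // used/total as a percentage, mirroring step 5); here "total" is the
    // JVM's currently allocated heap, not the server's physical memory.
    public static double memoryUsagePercent() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return 100.0 * used / rt.totalMemory();
    }

    public static void main(String[] args) {
        System.out.printf("memory usage: %.1f%%%n", memoryUsagePercent());
    }
}
```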
3. Resource pool queuing queue monitor device
This device monitors the queuing condition of the system's resource pools, so that tasks queued for resources are scheduled and processed as soon as the current performance allows, instead of waiting blindly, thereby improving the TPS of the whole system.
This device is likewise started from the plug-in resource package.
The detailed process is as follows:
1) Start.
2) The device starts with the target system.
3) Resource pool creation completes.
4) Monitor the queuing queue of the resource pool.
5) Obtain queuing rate data.
6) Send the data to the resource pool life cycle tracker device.
7) The resource pool life cycle tracker device takes the queuing rate and then obtains the current performance data of the running system from the system operating condition collector device.
8) If the rules are satisfied,
9) resource creation and expansion work is performed.
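The queuing rate the monitor reports (defined later as queued tasks / queue size) can be computed directly from a pool's bounded queue; a minimal sketch, with the helper name QueueMonitor as an illustrative assumption:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueMonitor {
    // Queuing rate = queued tasks / total queue capacity, the figure the
    // life cycle tracker compares against its >= 30% expansion rule.
    public static double queuingRate(BlockingQueue<?> queue) {
        int capacity = queue.size() + queue.remainingCapacity();
        return capacity == 0 ? 0.0 : (double) queue.size() / capacity;
    }

    public static void main(String[] args) {
        BlockingQueue<Runnable> q = new ArrayBlockingQueue<>(10);
        for (int i = 0; i < 3; i++) q.add(() -> {}); // 3 of 10 slots occupied
        System.out.println(queuingRate(q));
    }
}
```

For a live ThreadPoolExecutor, the same function can be applied to `executor.getQueue()`.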
4. Resource pool life cycle tracker device
This device tracks the life cycle (resource creation and resource queuing) of the resource pools used in the target system, so that resource creation and queuing can be carried out and allocated intelligently and safely according to the actual running condition of the system and the usage of resources. This avoids the problems caused by unreasonable user settings of resource pool capacity (for example, a pool that is too large exhausts server resources or causes frequent context switching, reducing the system's TPS, while a pool that is too small prevents the TPS from improving and wastes hardware resources).
This device likewise adopts the plug-in technique and is started from the plug-in resource package.
The detailed process is as follows:
1) Start.
2) Receive the performance characteristic values and raw queuing information of the target system (TPS, CPU utilization rate, memory utilization rate, disk utilization rate, and resource queuing rate); the target system starts and initializes the resource pool.
3) Allocate the resource pool size according to the rules.
4) The device ships with default rules that the user can modify, for example:
If CPU usage reaches 80%, stop creating resources.
If memory usage reaches 80%, stop creating resources.
If disk usage reaches 80%, stop creating resources.
If TPS decreases after resources are created, withdraw the creation (reduce the pool size and allow the resources to be reclaimed).
If the queuing rate of the resource pool (number of queued tasks / size of the queuing queue) is greater than or equal to 30% and the CPU, memory, and disk utilization rates are all below 80%, expand the resource pool (raise the pool's upper size limit so that it can continue to create resources, queued tasks are processed quickly, and the system's throughput improves).
Note: this case indicates that the configured resource pool size is insufficient while performance resources are still redundant and can fully support more resources, so the number of resources can be expanded and system efficiency improved.
8) If the current operating condition of the target system conforms to the creation rules, continue creating;
9) if not, stop creating and update the resource pool attribute maximumPoolSize (the number of resources).
Fig. 4 schematically shows a block diagram of a task processing device of a resource pool according to an embodiment of the present disclosure. The task processing device 400 of the resource pool provided in the embodiment of the present disclosure may be provided in a terminal device, or may be provided in a server, for example, the server 103 in fig. 1, but the present disclosure is not limited thereto.
The task processing device 400 of the resource pool provided by the embodiment of the present disclosure may include a first obtaining module 410, a second obtaining module 420, an adding module 430, and a processing module 440.
The first acquisition module is configured to acquire the type, the resource quantity and the queuing queue of a target system resource pool;
the second acquisition module is configured to acquire the central processing unit utilization rate, the memory utilization rate and the disk utilization rate of the server of the target system;
the increasing module is configured to increase the resource quantity of the resource pool when tasks are queued in a queue of the resource pool and the utilization rate of a central processing unit, the utilization rate of a memory and the utilization rate of a disk of the server are all lower than a warning line;
The processing module is configured to process the queued tasks in the queuing queue of the resource pool using the newly added resources of the resource pool.
The task processing device 400 of the resource pool can intelligently and automatically allocate the number of resources in the resource pool according to the actual running condition of the target system and the usage of the resource pool, so that the resource pool delivers its maximum benefit in the system's actual running context, resources are used more reasonably, and the performance and safety of the system are ensured.
According to the embodiment of the present disclosure, the task processing device 400 of the resource pool may be used to implement the task processing method of the resource pool described in the embodiment of fig. 3.
Fig. 5 schematically shows a block diagram of a task processing device 500 of a resource pool according to another embodiment of the present invention.
As shown in fig. 5, the task processing device 500 of the resource pool further includes a display module 510 in addition to the first obtaining module 410, the second obtaining module 420, the adding module 430 and the processing module 440 described in the embodiment of fig. 4.
Specifically, the display module 510 displays the number of resources of the resource pool to the client after the increase module 430 increases the number of resources of the resource pool.
In the task processing device 500 of the resource pool, the display module 510 may display the adjusted resource amount of the resource pool.
Fig. 6 schematically shows a block diagram of a task processing device 600 of a resource pool according to another embodiment of the present invention.
As shown in fig. 6, in addition to the first obtaining module 410, the second obtaining module 420, the adding module 430 and the processing module 440 described in the embodiment of fig. 4, the task processing device 600 of the resource pool further includes a storage module 610.
Specifically, the storage module 610 is configured to store data in task processing of the resource pool, so as to facilitate subsequent invocation and reference.
It is understood that the first obtaining module 410, the second obtaining module 420, the adding module 430, the processing module 440, the display module 510, and the storage module 610 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present invention, at least one of these modules may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, an Application Specific Integrated Circuit (ASIC), any other reasonable manner of integrating or packaging a circuit, or any suitable combination of software, hardware, and firmware. Alternatively, at least one of these modules may be at least partially implemented as a computer program module that, when executed by a computer, performs the functions of the respective module.
Since each module of the task processing apparatus of the resource pool in the exemplary embodiment of the present invention can be used to implement the steps of the exemplary embodiment of the task processing method described above with reference to fig. 3, for details not disclosed in the apparatus embodiment, please refer to the above embodiment of the task processing method of the resource pool.
The specific implementation of each module, unit and subunit in the task processing apparatus of the resource pool provided in the embodiments of the present disclosure may refer to the content in the task processing method of the resource pool, and is not described herein again.
It should be noted that although several modules, units and sub-units of the apparatus for action execution are mentioned in the above detailed description, such division is not mandatory. Indeed, the features and functionality of two or more modules, units and sub-units described above may be embodied in one module, unit and sub-unit, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module, unit and sub-unit described above may be further divided into embodiments by a plurality of modules, units and sub-units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A task processing method of a resource pool is characterized by comprising the following steps:
acquiring the category, the resource quantity and a queuing queue of a target system resource pool;
acquiring the central processing unit utilization rate, the memory utilization rate and the disk utilization rate of a server of the target system;
when tasks are queued in a queue of the resource pool and the utilization rate of a central processing unit, the utilization rate of a memory and the utilization rate of a disk of the server are all lower than warning lines, the quantity of the resources in the resource pool is increased;
and processing the queuing tasks in the queuing queue of the resource pool by using the newly added resources of the resource pool.
2. The method of claim 1, wherein obtaining the class, the number of resources, and the queue of the target system resource pool comprises:
mounting the target system in a plug-in mode;
acquiring a compiled file loaded by the target system and used by the target system;
and acquiring the type, the resource quantity and the queuing queue of the target system resource pool according to the compiled file.
3. The method of claim 1, wherein queuing tasks in a queue of the resource pool comprises:
the queuing rate of tasks in the queue of the resource pool is higher than the first rate.
4. The method of claim 1, further comprising:
and when the number of resources of the resource pool is above an idle line, reducing the number of resources of the resource pool.
5. The method of claim 1, further comprising:
acquiring the number of tasks processed in unit time of the target system;
and after the number of the resources of the resource pool is increased, if the number of the tasks processed in the unit time of the target system is reduced, the newly created resources of the resource pool are cancelled.
6. The method of claim 5, wherein obtaining the number of tasks processed per unit time of the target system comprises:
and acquiring data of the port used by the target system to acquire the number of tasks processed by the target system in unit time.
7. The method of claim 1, further comprising:
and when one of the utilization rate of a central processing unit, the utilization rate of a memory and the utilization rate of a disk of the server of the target system is higher than or equal to a warning line, stopping creating the resources of the resource pool or cancelling the newly created resources of the resource pool.
8. A task processing apparatus of a resource pool, comprising:
the first acquisition module is configured to acquire the type, the resource quantity and the queuing queue of the target system resource pool;
the second acquisition module is configured to acquire the central processing unit utilization rate, the memory utilization rate and the disk utilization rate of the server of the target system;
the increasing module is configured to increase the resource quantity of the resource pool when tasks are queued in a queue of the resource pool and the utilization rate of a central processing unit, the utilization rate of a memory and the utilization rate of a disk of the server are all lower than a warning line;
and the processing module is configured to process the queued tasks in the queuing queue of the resource pool by using the newly added resources of the resource pool.
9. An electronic device, comprising:
one or more processors;
a storage device configured to store one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110468537.0A CN113204426A (en) | 2021-04-28 | 2021-04-28 | Task processing method of resource pool and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113204426A true CN113204426A (en) | 2021-08-03 |
Family
ID=77029418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110468537.0A Pending CN113204426A (en) | 2021-04-28 | 2021-04-28 | Task processing method of resource pool and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113204426A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115794353A (en) * | 2022-12-30 | 2023-03-14 | 中国联合网络通信集团有限公司 | Cloud network service quality optimization processing method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5165960B2 * | 2006-08-07 | 2013-03-21 | International Business Machines Corporation | Balancing resource sharing and application latency in data processing systems |
CN103560915A (en) * | 2013-11-07 | 2014-02-05 | 浪潮(北京)电子信息产业有限公司 | Method and system for managing resources in cloud computing system |
CN105320565A (en) * | 2014-07-31 | 2016-02-10 | 中国石油化工股份有限公司 | Computer resource scheduling method for various application software |
CN111767199A (en) * | 2020-06-24 | 2020-10-13 | 中国工商银行股份有限公司 | Resource management method, device, equipment and system based on batch processing operation |
-
2021
- 2021-04-28 CN CN202110468537.0A patent/CN113204426A/en active Pending
Non-Patent Citations (2)
Title |
---|
吴晟 (Wu Sheng): "Apache SkyWalking实战" (Apache SkyWalking in Action), China Machine Press, 31 July 2020, page 27 *
张亚 (Zhang Ya): "深入理解JVM字节码" (Understanding JVM Bytecode in Depth), China Machine Press, 31 May 2020, pages 175-176 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115794353A (en) * | 2022-12-30 | 2023-03-14 | 中国联合网络通信集团有限公司 | Cloud network service quality optimization processing method, device, equipment and storage medium |
CN115794353B (en) * | 2022-12-30 | 2024-02-23 | 中国联合网络通信集团有限公司 | Cloud network service quality optimization processing method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107729139B (en) | Method and device for concurrently acquiring resources | |
US20220004480A1 (en) | Log data collection method, log data collection device, storage medium, and log data collection system | |
US20220229701A1 (en) | Dynamic allocation of computing resources | |
CN110706093A (en) | Accounting processing method and device | |
CN115237589A (en) | SR-IOV-based virtualization method, device and equipment | |
CN113204426A (en) | Task processing method of resource pool and related equipment | |
CN112561301A (en) | Work order distribution method, device, equipment and computer readable medium | |
WO2023273564A1 (en) | Virtual machine memory management method and apparatus, storage medium, and electronic device | |
CN112817687A (en) | Data synchronization method and device | |
CN116032614A (en) | Container network micro-isolation method, device, equipment and medium | |
CN115437709A (en) | Method and device for loading application home page splash screen resources | |
CN112148448B (en) | Resource allocation method, apparatus, device and computer readable medium | |
CN114374657A (en) | Data processing method and device | |
CN112163176A (en) | Data storage method and device, electronic equipment and computer readable medium | |
CN112527454A (en) | Container group scheduling method and device, electronic equipment and computer readable medium | |
CN110825920A (en) | Data processing method and device | |
CN112988806A (en) | Data processing method and device | |
CN114153620B (en) | Optimal allocation method and device for Hudi operating environment resources | |
CN115794353B (en) | Cloud network service quality optimization processing method, device, equipment and storage medium | |
CN111625524B (en) | Data processing method, device, equipment and storage medium | |
CN115037729B (en) | Data aggregation method, device, electronic equipment and computer readable medium | |
CN113177173B (en) | Data access method, device, equipment and storage medium | |
CN118132010B (en) | Data storage method and device | |
CN113934761A (en) | Data processing method and device | |
CN113065042A (en) | Management method and device of terminal equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||