CN111142943A - Automatic control concurrency method and device - Google Patents

Automatic control concurrency method and device

Info

Publication number
CN111142943A
CN111142943A (application number CN201911374018.7A)
Authority
CN
China
Prior art keywords
threads
thread
percentage
concurrency
maximum
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911374018.7A
Other languages
Chinese (zh)
Inventor
陈国杰
刘頔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN201911374018.7A priority Critical patent/CN111142943A/en
Publication of CN111142943A publication Critical patent/CN111142943A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30: Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38: Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3818: Decoding for concurrent execution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/445: Program loading or initiating
    • G06F9/44505: Configuring for program initiating, e.g. using registry, configuration files
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides an automatic control concurrency method and device. The method comprises: configuring concurrent processing parameters in an XML file to generate a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads, the maximum CPU usage percentage and the task queue size; receiving concurrent processing task data; and determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, thereby controlling concurrent processing automatically. By having the dynamic proxy control the number of concurrent threads in the thread pool according to the CPU usage percentage, the scheme achieves automatic concurrency control and improves both concurrency performance and the utilization of system resources.

Description

Automatic control concurrency method and device
Technical Field
The invention relates to the technical field of concurrency control, in particular to an automatic concurrency control method and device.
Background
In program development, as business scenarios multiply, user access volumes grow and the amount of generated data increases, most systems adopt thread-pool-based concurrent processing in order to process data faster and respond quickly. Processing is controlled by a configuration parameter of the thread pool, namely the maximum number of threads: when threads execute concurrently, the system checks whether the number of threads in use in the pool has reached the maximum; if it has, no further threads execute concurrently, otherwise a new thread is created for execution. An existing system can configure different maximum thread counts for different traffic volumes, but the thread pool has a fixed range, and it is difficult to estimate accurately a maximum value that is reasonable for varying traffic. If the configuration is too large, CPU usage can reach 100%, system resources are wasted, and the additional context switching during concurrent execution adds overhead, so concurrent processing slows down. If the configuration is too small, the service's data cannot be processed and responded to quickly enough. Both cases are defective: neither concurrency performance nor resource utilization is improved.
Disclosure of Invention
The embodiments of the invention provide an automatic concurrency control method and device, which solve the technical problem in the prior art that concurrency performance and resource utilization cannot be improved.
The embodiment of the invention provides an automatic control concurrency method, which comprises the following steps:
configuring concurrent processing parameters in an XML file to generate a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads, the maximum CPU usage percentage and the task queue size;
receiving concurrent processing task data;
and determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, thereby controlling concurrent processing automatically.
The embodiment of the invention also provides an automatic control concurrency device, which comprises:
the configuration module is used for configuring concurrent processing parameters in the XML file and generating a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads, the maximum CPU usage percentage and the task queue size;
the receiving module is used for receiving concurrent processing task data;
and the automatic control concurrency processing module is used for determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, so as to control concurrent processing automatically.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program for executing the method.
In the embodiment of the invention, a dynamic proxy controls the number of concurrent threads in the thread pool according to the CPU usage percentage, so that concurrency is controlled automatically and both concurrency performance and the utilization of system resources can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a first flowchart of an automatic control concurrency method provided by an embodiment of the present invention;
Fig. 2 is a second flowchart of an automatic control concurrency method provided by an embodiment of the present invention;
Fig. 3 is a first block diagram of an automatic control concurrency device provided by an embodiment of the present invention;
Fig. 4 is a second block diagram of an automatic control concurrency device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Explanation of technical terms
A Central Processing Unit (CPU) is a very-large-scale integrated circuit that serves as the computing core and control core of a computer. Its main functions are to interpret computer instructions and to process the data in computer software. A CPU mainly comprises an arithmetic logic unit (ALU), a cache, and the data, control and status buses that connect them. Together with internal memory and input/output (I/O) devices, it is counted among the three core components of an electronic computer.
Multithreading refers to a technique, implemented in software or hardware, for executing multiple threads concurrently. A computer with hardware support for multithreading can execute more than one thread at the same time, thereby improving overall processing performance. Systems with this capability include symmetric multiprocessors, multi-core processors, and chip-level multithreading or simultaneous multithreading processors. Within a program, these independently running program fragments are called "threads", and the programming concept that uses them is called "multithreading".
A thread pool is a form of multi-threaded processing in which tasks are added to a queue and started automatically once a thread is available. Thread pool threads are background threads; each uses a default stack size, runs at a default priority and resides in the multithreaded apartment. If a thread is idle in managed code (for example, waiting for an event), the thread pool inserts another worker thread to keep all processors busy. If all pool threads remain busy while pending work is still held in the queue, the pool creates another worker thread after a period of time, but the number of threads never exceeds the maximum. Tasks beyond the maximum are queued and wait until other threads complete before starting.
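For orientation only (this example is not part of the original description): the parameter names corePoolSize, maxmumPoolSize and keepAliveTime used later suggest a Java setting, where a conventional thread pool with exactly such fixed parameters can be built with the standard JDK ThreadPoolExecutor. The CPU-percentage control described later is what the invention adds on top of this baseline; the values below are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PlainPoolDemo {
    public static void main(String[] args) {
        // A conventional fixed-range pool: 4 core threads, at most 8 threads,
        // idle extra threads die after 60 s, bounded queue of 100 tasks.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100));

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    "task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```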
XML file: the Extensible Markup Language, a subset of the Standard Generalized Markup Language (SGML), is a markup language for structuring electronic documents. XML is an important vehicle for data transmission on the Internet: it works across platforms and is independent of programming language and operating system, which makes it one of the most widely accepted data carriers on the Internet. XML is a powerful technique for handling structured document information; it helps move structured data between servers and gives developers finer control over how data is stored and transferred. It can be used to mark up data and to define data types, and it allows users to define their own markup. Being well suited to transport over the Web, XML provides a unified way to describe and exchange structured data that is independent of any application or vendor.
In an embodiment of the present invention, an automatic concurrency control method is provided, as shown in fig. 1, the method includes:
step 102: configuring concurrent processing parameters in an XML file to generate a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads (corePoolSize), the maximum CPU usage percentage (maxCPUPercent) and the task queue size (queueSize);
step 104: receiving concurrent processing task data;
step 106: determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, thereby controlling concurrent processing automatically.
In this embodiment of the present invention, the concurrent processing parameters further include a maximum thread number (maxmumPoolSize) and/or a time for which a thread remains active (keepAliveTime).
In this embodiment of the present invention, step 106 specifically includes:
The concurrent processing task data are placed in the task queue, and the threads that execute tasks take tasks from the queue. When the number of executing threads reaches the number of core threads and the number of tasks in the queue reaches the task queue size, the current CPU usage percentage is compared with the maximum CPU usage percentage: if the current CPU usage percentage is less than the maximum CPU usage percentage, a new thread is created; if the current CPU usage percentage is greater than the maximum CPU usage percentage, the task continues to be executed only after an executing thread finishes and is released.
In this embodiment of the present invention, step 106 specifically includes:
adding the number of newly created threads to the number of executing threads to obtain the total number of threads;
The total number of threads is compared with the maximum number of threads: if the total number of threads is less than the maximum number of threads and the current CPU usage percentage is less than the maximum CPU usage percentage, new threads continue to be created; if the total number of threads is greater than the maximum number of threads, or the current CPU usage percentage is greater than the maximum CPU usage percentage, the task continues to be executed only after an executing thread finishes and is released.
In this embodiment of the present invention, step 106 specifically includes:
The survival time of a newly created thread is compared with the time for which a thread remains active (keepAliveTime); if the survival time is longer than that time, the newly created thread is terminated.
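A minimal sketch of the three checks just described (core threads and queue full, CPU headroom, maximum thread count) is given below. It is an illustration only, not the patent's implementation: the class and field names are invented, and the current CPU usage is read via the HotSpot-specific com.sun.management.OperatingSystemMXBean (getCpuLoad() requires JDK 14 or later; earlier JDKs expose getSystemCpuLoad()).

```java
import com.sun.management.OperatingSystemMXBean;
import java.lang.management.ManagementFactory;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper mirroring the checks of step 106; all names are illustrative.
public class CpuAwarePoolController {
    private final int corePoolSize;       // configured number of core threads
    private final int maxmumPoolSize;     // configured maximum number of threads
    private final double maxCPUPercent;   // configured maximum CPU usage percentage
    private final BlockingQueue<Runnable> taskQueue;  // bounded by queueSize
    private final AtomicInteger runningThreads = new AtomicInteger();

    private final OperatingSystemMXBean os =
            (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

    public CpuAwarePoolController(int corePoolSize, int maxmumPoolSize,
                                  double maxCPUPercent,
                                  BlockingQueue<Runnable> taskQueue) {
        this.corePoolSize = corePoolSize;
        this.maxmumPoolSize = maxmumPoolSize;
        this.maxCPUPercent = maxCPUPercent;
        this.taskQueue = taskQueue;
    }

    void onWorkerStarted()  { runningThreads.incrementAndGet(); }
    void onWorkerFinished() { runningThreads.decrementAndGet(); }

    /** May another worker thread be created right now? */
    public boolean mayCreateNewThread() {
        boolean coreBusy = runningThreads.get() >= corePoolSize;
        boolean queueFull = taskQueue.remainingCapacity() == 0;
        if (!coreBusy || !queueFull) {
            return false;  // core threads and the queue can still absorb the work
        }
        double cpuPercent = os.getCpuLoad() * 100;  // system CPU usage, 0..100 (JDK 14+)
        // Grow the pool only while both limits leave headroom.
        return cpuPercent < maxCPUPercent
                && runningThreads.get() < maxmumPoolSize;
    }
}
```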
In the embodiment of the present invention, as shown in fig. 2, the method further includes:
step 108: and after the configuration is completed, verifying the correctness of the concurrent processing file script in the XML format.
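The patent does not spell out how this verification is performed; the sketch below is one plausible reading, checking only the structural rules stated later in the description (the script parses as XML, the root element is &lt;config&gt;, and bean id values are unique). Class and method names are assumptions.

```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical verifier for the XML-format concurrent processing file script.
public class ConfigVerifier {

    public static void verify(File xmlFile) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(xmlFile);                       // throws if not well-formed XML

        if (!"config".equals(doc.getDocumentElement().getTagName())) {
            throw new IllegalStateException("root element must be <config>");
        }

        NodeList beans = doc.getElementsByTagName("bean");
        Set<String> ids = new HashSet<>();
        for (int i = 0; i < beans.getLength(); i++) {
            String id = ((Element) beans.item(i)).getAttribute("id");
            if (id.isEmpty() || !ids.add(id)) {
                throw new IllegalStateException("bean id missing or repeated: " + id);
            }
        }
    }
}
```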
In an embodiment of the present invention, the concurrent processing parameter may further include a sleep time (sleepTime).
Purpose of the sleep time: tasks do not arrive at every moment of the day. When there are very few tasks over a period of time, a sleep time (sleepTime) is configured so that the worker threads can stay in a sleep state until the sleep time elapses, or until the application wakes them up when tasks arrive, which improves resource utilization.
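The description does not disclose how this sleep state is implemented; one common Java pattern that matches the behaviour, shown purely as an illustrative sketch, is a timed wait on the task queue's lock that the application notifies when new work arrives.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative only: a worker that sleeps up to sleepTime ms whenever no work is queued.
public class SleepyWorker implements Runnable {
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private final long sleepTimeMillis;

    public SleepyWorker(long sleepTimeMillis) {
        this.sleepTimeMillis = sleepTimeMillis;
    }

    /** Called by the application when a task arrives; wakes any sleeping worker. */
    public void submit(Runnable task) {
        synchronized (tasks) {
            tasks.add(task);
            tasks.notifyAll();
        }
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable task;
            synchronized (tasks) {
                while (tasks.isEmpty()) {
                    try {
                        // Sleep until sleepTime elapses or submit() wakes us up.
                        tasks.wait(sleepTimeMillis);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                task = tasks.poll();
            }
            task.run();
        }
    }
}
```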
Specifically, the concurrent processing file script in the XML format is as follows:
[The XML script is shown as an image in the original filing (Figure BDA0002340432760000051) and is not reproduced here.]
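Based on the format rules listed below (a config root, one bean per concurrency configuration, and the corePoolSize, maxCPUPercent, queueSize, maxmumPoolSize, keepAliveTime and sleepTime parameters), an illustrative reconstruction might look like the following; the attribute spellings, values and class names are assumptions rather than the patent's actual content.

```xml
<!-- Illustrative reconstruction only; values and class names are invented. -->
<config>
    <bean id="orderTaskPool" class="com.example.task.OrderTaskProcessor"
          corePoolSize="4"
          maxCPUPercent="80"
          queueSize="200"
          maxmumPoolSize="16"
          keepAliveTime="60"
          sleepTime="5000"/>
    <bean id="reportTaskPool" class="com.example.task.ReportTaskProcessor"
          corePoolSize="2"
          maxCPUPercent="60"
          queueSize="100"
          maxmumPoolSize="8"
          keepAliveTime="30"/>
</config>
```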
Description of the format (which also serves as the basis for the correctness check):
1. The script must begin with <config>;
2. this is followed by <bean id="...">;
each bean corresponds to one concurrency-control configuration, and several configurations correspond to several <bean id="..."> entries; the class attribute of a bean names the specific program content to be executed, and that execution content resides in a jar package; the value of id must not be repeated;
3. the script must end with </config>;
4. when the program starts, the threads are created first and are then placed in the pool corresponding to the bean id.
queueSize is the size of the task queue; tasks to be executed are placed in the queue.
keepAliveTime is the lifetime of a thread, namely of the threads in excess of corePoolSize.
maxmumPoolSize is the maximum number of threads, which must not be exceeded during program execution.
How to use:
When the program starts, a pool is created for each distinct bean id, together with the minimum number of threads.
When a task arrives it is placed into the queue (queueSize), and a thread then takes a task from the queue and executes it. If tasks arrive faster than the threads can execute them, that is, the backlog exceeds the queue size, the creation of a new thread begins. Before the new thread is created, the system checks whether the current CPU usage percentage is greater than maxCPUPercent: if it is, no new thread can be created; otherwise a new thread is created.
If the number of created threads exceeds maxmumPoolSize, no further threads can be created.
When creating a thread, the total must neither exceed maxmumPoolSize nor may the current CPU usage percentage exceed maxCPUPercent.
If there are no tasks and the current number of threads exceeds corePoolSize, the redundant threads whose survival time exceeds the configured keepAliveTime are removed.
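Putting the above together, the following sketch shows how such a configuration might be loaded at program start-up, creating one pool controller per bean id (reusing the CpuAwarePoolController sketched earlier). It assumes the illustrative attribute layout shown above; none of this is taken from the patent's own code.

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical start-up code: one CpuAwarePoolController per <bean id="..."> entry.
public class ConcurrencyConfigLoader {

    public static Map<String, CpuAwarePoolController> load(File xmlFile) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(xmlFile);
        NodeList beans = doc.getElementsByTagName("bean");

        Map<String, CpuAwarePoolController> pools = new HashMap<>();
        for (int i = 0; i < beans.getLength(); i++) {
            Element bean = (Element) beans.item(i);
            String id = bean.getAttribute("id");   // unique per the format rules above
            int corePoolSize     = Integer.parseInt(bean.getAttribute("corePoolSize"));
            int maxmumPoolSize   = Integer.parseInt(bean.getAttribute("maxmumPoolSize"));
            double maxCPUPercent = Double.parseDouble(bean.getAttribute("maxCPUPercent"));
            int queueSize        = Integer.parseInt(bean.getAttribute("queueSize"));

            pools.put(id, new CpuAwarePoolController(
                    corePoolSize, maxmumPoolSize, maxCPUPercent,
                    new ArrayBlockingQueue<>(queueSize)));
        }
        return pools;
    }
}
```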
Based on the same inventive concept, the embodiment of the present invention further provides an automatic control concurrency device, as described in the following embodiments. Because the principle of solving the problem of the automatic control concurrency device is similar to that of the automatic control concurrency method, the implementation of the automatic control concurrency device can refer to the implementation of the automatic control concurrency method, and repeated parts are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 3 is a first block diagram of the automatic control concurrency device according to an embodiment of the present invention. As shown in Fig. 3, the device includes:
the configuration module 02, which is used for configuring concurrent processing parameters in the XML file and generating a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads, the maximum CPU usage percentage and the task queue size;
the receiving module 04, which is used for receiving concurrent processing task data;
and the automatic control concurrency processing module 06, which is used for determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, so as to control concurrent processing automatically.
In an embodiment of the present invention, the concurrent processing parameters further include a maximum thread number and/or a time for which a thread remains active.
In the embodiment of the present invention, the automatic control concurrency processing module 06 is specifically configured to:
place the concurrent processing task data in the task queue, where the threads that execute tasks take tasks from the queue; when the number of executing threads reaches the number of core threads and the number of tasks in the queue reaches the task queue size, compare the current CPU usage percentage with the maximum CPU usage percentage; if the current CPU usage percentage is less than the maximum CPU usage percentage, create a new thread; if the current CPU usage percentage is greater than the maximum CPU usage percentage, continue to execute the task only after an executing thread finishes and is released.
In the embodiment of the present invention, the automatic control concurrency processing module 06 is specifically configured to:
add the number of newly created threads to the number of executing threads to obtain the total number of threads;
and compare the total number of threads with the maximum number of threads; if the total number of threads is less than the maximum number of threads and the current CPU usage percentage is less than the maximum CPU usage percentage, continue to create new threads; if the total number of threads is greater than the maximum number of threads, or the current CPU usage percentage is greater than the maximum CPU usage percentage, continue to execute the task only after an executing thread finishes and is released.
In the embodiment of the present invention, the automatic control concurrency processing module 06 is specifically configured to:
compare the survival time of a newly created thread with the time for which a thread remains active; if the survival time is longer than that time, terminate the newly created thread.
In the embodiment of the present invention, as shown in Fig. 4, the device further includes:
the verification module 08, which is used for verifying the correctness of the concurrent processing file script in XML format after the configuration is completed.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the method when executing the computer program.
The embodiment of the invention also provides a computer readable storage medium, and the computer readable storage medium stores a computer program for executing the method.
In summary, in the present invention a dynamic proxy controls the number of concurrent threads in the thread pool according to the CPU usage percentage, thereby achieving automatic concurrency control and improving both concurrency performance and the utilization of system resources.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. An automatic control concurrency method, comprising:
configuring concurrent processing parameters in an XML file to generate a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads, the maximum CPU usage percentage and the task queue size;
receiving concurrent processing task data;
and determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, thereby realizing automatic control of concurrent processing.
2. The method of automatically controlling concurrency according to claim 1, wherein said concurrency processing parameters further include a maximum number of threads and/or a time for which a thread remains active.
3. The method of automatically controlling concurrency according to claim 2, wherein determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data comprises:
putting the concurrent processing task data into the task queue, the threads that execute tasks taking tasks from the queue; when the number of executing threads reaches the number of core threads and the number of tasks in the queue reaches the task queue size, comparing the current CPU usage percentage with the maximum CPU usage percentage; creating a new thread if the current CPU usage percentage is less than the maximum CPU usage percentage; and, if the current CPU usage percentage is greater than the maximum CPU usage percentage, continuing to execute the task only after an executing thread finishes and is released.
4. The method of automatically controlling concurrency according to claim 3, wherein determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data further comprises:
adding the number of newly created threads to the number of executing threads to obtain the total number of threads;
and comparing the total number of threads with the maximum number of threads; continuing to create new threads if the total number of threads is less than the maximum number of threads and the current CPU usage percentage is less than the maximum CPU usage percentage; and, if the total number of threads is greater than the maximum number of threads or the current CPU usage percentage is greater than the maximum CPU usage percentage, continuing to execute the task only after an executing thread finishes and is released.
5. The method of automatically controlling concurrency of claim 4, wherein determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data further comprises:
comparing the survival time of a newly created thread with the time for which a thread remains active, and terminating the newly created thread if the survival time is longer than the time for which the thread remains active.
6. The automatic control concurrency method according to claim 1, further comprising:
and after the configuration is completed, verifying the correctness of the concurrent processing file script in the XML format.
7. An automatically controlled concurrency device, comprising:
the configuration module, which is used for configuring concurrent processing parameters in the XML file and generating a concurrent processing file script in XML format, wherein the concurrent processing parameters comprise the number of core threads, the maximum CPU usage percentage and the task queue size;
the receiving module, which is used for receiving concurrent processing task data;
and the automatic control concurrency processing module, which is used for determining, via a dynamic proxy, the number of concurrent threads in the thread pool according to the number of core threads, the maximum CPU usage percentage, the task queue size and the concurrent processing task data, so as to control concurrent processing automatically.
8. The apparatus according to claim 7, wherein the concurrency handling parameters further comprise a maximum number of threads and/or a time for which a thread remains active.
9. The automatic control concurrency device according to claim 8, wherein the automatic control concurrency processing module is specifically configured to:
place the concurrent processing task data in the task queue, where the threads that execute tasks take tasks from the queue; when the number of executing threads reaches the number of core threads and the number of tasks in the queue reaches the task queue size, compare the current CPU usage percentage with the maximum CPU usage percentage; if the current CPU usage percentage is less than the maximum CPU usage percentage, create a new thread; if the current CPU usage percentage is greater than the maximum CPU usage percentage, continue to execute the task only after an executing thread finishes and is released.
10. The automatic control concurrency device according to claim 9, wherein the automatic control concurrency processing module is specifically configured to:
add the number of newly created threads to the number of executing threads to obtain the total number of threads;
and compare the total number of threads with the maximum number of threads; if the total number of threads is less than the maximum number of threads and the current CPU usage percentage is less than the maximum CPU usage percentage, continue to create new threads; if the total number of threads is greater than the maximum number of threads, or the current CPU usage percentage is greater than the maximum CPU usage percentage, continue to execute the task only after an executing thread finishes and is released.
11. The automatic control concurrency device according to claim 10, wherein the automatic control concurrency processing module is specifically configured to:
compare the survival time of a newly created thread with the time for which a thread remains active; if the survival time is longer than that time, terminate the newly created thread.
12. The automatically controlled concurrency device of claim 7, further comprising:
and the verification module is used for verifying the correctness of the concurrent processing file script in the XML format after the configuration is finished.
13. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1 to 6 when executing the computer program.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for executing the method of any one of claims 1 to 6.
CN201911374018.7A 2019-12-27 2019-12-27 Automatic control concurrency method and device Pending CN111142943A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911374018.7A CN111142943A (en) 2019-12-27 2019-12-27 Automatic control concurrency method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911374018.7A CN111142943A (en) 2019-12-27 2019-12-27 Automatic control concurrency method and device

Publications (1)

Publication Number Publication Date
CN111142943A true CN111142943A (en) 2020-05-12

Family

ID=70520836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911374018.7A Pending CN111142943A (en) 2019-12-27 2019-12-27 Automatic control concurrency method and device

Country Status (1)

Country Link
CN (1) CN111142943A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130103956A1 (en) * 2011-10-25 2013-04-25 Fujitsu Limited Method for controlling mobile terminal device, medium for storing control program, and mobile terminal device
CN105630606A (en) * 2015-12-22 2016-06-01 山东中创软件工程股份有限公司 Method and device for adjusting capacity of thread pools
CN107220033A (en) * 2017-07-05 2017-09-29 百度在线网络技术(北京)有限公司 Method and apparatus for controlling thread pool thread quantity
CN110275760A (en) * 2019-06-27 2019-09-24 深圳市网心科技有限公司 Process based on fictitious host computer processor hangs up method and its relevant device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858055A (en) * 2020-07-23 2020-10-30 平安普惠企业管理有限公司 Task processing method, server and storage medium
CN111858055B (en) * 2020-07-23 2023-02-03 平安普惠企业管理有限公司 Task processing method, server and storage medium
CN112000455A (en) * 2020-09-10 2020-11-27 华云数据控股集团有限公司 Multithreading task processing method and device and electronic equipment
CN112433837A (en) * 2020-11-24 2021-03-02 上海浦东发展银行股份有限公司 Asynchronous resource processing method and system based on enterprise service bus
CN113194040A (en) * 2021-04-28 2021-07-30 王程 Intelligent control method for instantaneous high-concurrency server thread pool congestion
CN117294347A (en) * 2023-11-24 2023-12-26 成都本原星通科技有限公司 Satellite signal receiving and processing method
CN117294347B (en) * 2023-11-24 2024-01-30 成都本原星通科技有限公司 Satellite signal receiving and processing method

Similar Documents

Publication Publication Date Title
CN111142943A (en) Automatic control concurrency method and device
US8621475B2 (en) Responsive task scheduling in cooperative multi-tasking environments
KR101953906B1 (en) Apparatus for scheduling task
US10146588B2 (en) Method and apparatus for processing computational task having multiple subflows
CN109144710B (en) Resource scheduling method, device and computer readable storage medium
CN102360309B (en) Scheduling system and scheduling execution method of multi-core heterogeneous system on chip
US9396028B2 (en) Scheduling workloads and making provision decisions of computer resources in a computing environment
JP2829078B2 (en) Process distribution method
CN103677999A (en) Management of resources within a computing environment
CN108027751A (en) Efficient scheduling to multi version task
CN113504985B (en) Task processing method and network equipment
US20060265712A1 (en) Methods for supporting intra-document parallelism in XSLT processing on devices with multiple processors
CN108595282A (en) A kind of implementation method of high concurrent message queue
KR101653204B1 (en) System and method of dynamically task managing for data parallel processing on multi-core system
CN107851039A (en) System and method for resource management
CN106557369A (en) A kind of management method and system of multithreading
US20200110634A1 (en) Managing Task Dependency
Margiolas et al. Portable and transparent software managed scheduling on accelerators for fair resource sharing
CN111459622B (en) Method, device, computer equipment and storage medium for scheduling virtual CPU
EP1693743A2 (en) System, method and medium for using and/or providing operating system information to acquire a hybrid user/operating system lock
US9229716B2 (en) Time-based task priority boost management using boost register values
KR20130051076A (en) Method and apparatus for scheduling application program
TW200905567A (en) Notifying user mode scheduler of blocking events
CN115964150A (en) Business processing method, system, device and medium based on double real-time kernels
WO2004095271A2 (en) Dynamic distributed make

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200512