CN115168012A - Thread pool concurrent thread number determining method and related product - Google Patents


Info

Publication number
CN115168012A
CN115168012A (application CN202210890624.XA)
Authority
CN
China
Prior art keywords
target
thread
determining
thread pool
data source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210890624.XA
Other languages
Chinese (zh)
Inventor
练刚
欧阳张鹏
赵彦晖
耿心伟
曾源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weizhong Credit Technology Co ltd
Original Assignee
Shenzhen Weizhong Credit Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Weizhong Credit Technology Co ltd filed Critical Shenzhen Weizhong Credit Technology Co ltd
Priority to CN202210890624.XA priority Critical patent/CN115168012A/en
Publication of CN115168012A publication Critical patent/CN115168012A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

The application provides a method for determining the number of concurrent threads in a thread pool, and a related product. The method includes: receiving a target task request and, according to the target loan business process that the target task request asks to be processed, storing the target task request in a target task queue among N task queues; and determining the target number of concurrent threads in the target thread pool according to the target thread pool, among M thread pools, corresponding to the target loan business process and the target queue information of the target task queue. With the method of the embodiments of the application, the target thread pool is selected by the loan business process the request asks to be processed, and the number of concurrent threads in that pool is set from the queue information of the target task queue. This isolates different loan business processes from one another and lets the number of concurrent threads for each process be controlled by the queue information of its task queue.

Description

Thread pool concurrent thread number determination method and related product
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method for determining the number of concurrent threads in a thread pool and a related product.
Background
As market activity has grown, financial loan transactions, in which funds are advanced on condition of repayment, have increased sharply in number to meet the market's demand for capital, and the number of financial loan task requests that financial institution systems must process has grown at an unprecedented rate.
Currently, to process and respond to task requests for financial loan transactions promptly, a financial institution typically handles large volumes of task requests by combining a thread pool, which completes task requests quickly and can absorb bursts of requests, with a task queue, which provides a buffering mechanism for the financial server. However, because financial loan transactions are complex in flow and very large in number and scale, if a single thread pool processes several different links of the financial loan transaction at the same time in the conventional way, the financial server may malfunction or crash.
Disclosure of Invention
The embodiments of the application provide a method for determining the number of concurrent threads in a thread pool, and a related product. By determining the target thread pool corresponding to a target task request and the target number of concurrent threads for that pool, the loan business process is executed efficiently and without redundancy.
In a first aspect, an embodiment of the present application provides a method for determining the number of concurrent threads in a thread pool, applied to a financial server that includes N task queues and M thread pools, the method including:
receiving a target task request, and storing the target task request in a target task queue among the N task queues according to the target loan business process that the target task request asks to be processed;
and determining the target number of concurrent threads in the target thread pool according to the target thread pool, among the M thread pools, corresponding to the target loan business process and the target queue information of the target task queue.
In a second aspect, an embodiment of the present application provides a device for determining the number of concurrent threads in a thread pool, applied in a financial server that includes N task queues and M thread pools, the device including:
a receiving unit, configured to receive a target task request and store it in a target task queue among the N task queues according to the target loan business process that the target task request asks to be processed;
and a determining unit, configured to determine the target number of concurrent threads in the target thread pool according to the target thread pool, among the M thread pools, corresponding to the target loan business process and the target queue information of the target task queue.
In a third aspect, embodiments of the present application provide an electronic device that includes a processor, a memory, and computer-executable instructions stored in the memory and executable on the processor; when the instructions are executed, the electronic device performs some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon computer instructions, which, when executed on a communication apparatus, cause the communication apparatus to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a computer program operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the application, a target task request is received and stored in a target task queue among the N task queues according to the target loan business process that the request asks to be processed, and the target number of concurrent threads in the target thread pool is determined from the thread pool, among the M thread pools, corresponding to that process and from the target queue information of the target task queue. With the method of the embodiments of the application, the target thread pool is selected by the loan business process being requested and the concurrent thread count is set from the queue information of the target task queue, so different loan business processes are isolated from one another, the number of concurrent threads for each process can be controlled by the queue information of its task queue, and the loan business process is executed efficiently and without redundancy.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a financial server according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining a concurrent thread count of a thread pool according to an embodiment of the present application;
fig. 3 is an exemplary schematic diagram of a method for determining a number of concurrent threads in a thread pool according to an embodiment of the present application;
fig. 4 is an exemplary schematic diagram of another method for determining the number of concurrent threads in a thread pool according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a thread pool concurrent thread number determining apparatus according to an embodiment of the present application;
fig. 6 is a schematic server structure diagram of a hardware operating environment of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The following describes an application scenario related to an embodiment of the present application with reference to the drawings.
Fig. 1 is a schematic structural diagram of a financial server according to an embodiment of the present disclosure. As shown in Fig. 1, the financial server includes a task queue and a thread pool, and a first end of the financial server is connected to a financial institution terminal. The task queue stores the task requests initiated by the financial institution terminal to the financial server, and the thread pool creates a corresponding number of concurrent threads to process the task request being executed. A new task request received from the financial institution terminal enters at the tail of the task queue to complete enqueuing. The task request at the head of the queue waits until the corresponding thread pool has finished the task request it is executing, that is, until no task request is executing in that pool; it then leaves from the head of the queue to complete dequeuing and enters the corresponding thread pool for execution. Meanwhile, the newer task requests move forward toward the head of the queue, advancing in execution order. In this way, through the cooperation of the task queue and the thread pool, the financial server completes its responses to the task requests from the financial institution terminal.
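The queue-and-pool cooperation described for Fig. 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the single dispatcher thread, the sentinel shutdown, and the `handle` placeholder are all assumptions made for demonstration.

```python
import queue
import threading

# Minimal sketch (not the patent's implementation) of the cooperation
# described above: requests enqueue at the tail, and the head request is
# handed over for execution only when nothing is currently executing.
task_queue = queue.Queue()        # FIFO: new requests join at the tail
results = []

def handle(request):
    # Placeholder for the actual loan-business processing.
    results.append(f"done:{request}")

def dispatcher():
    # Take the head request, execute it, repeat; later requests move
    # forward toward the head as earlier ones dequeue.
    while True:
        request = task_queue.get()
        if request is None:       # sentinel: no more requests
            break
        handle(request)

worker = threading.Thread(target=dispatcher)
worker.start()
for r in ["req1", "req2", "req3"]:
    task_queue.put(r)             # enqueue at the tail
task_queue.put(None)
worker.join()
print(results)                    # completed in enqueue order
```

Because a single dispatcher drains the queue, the requests complete in enqueue order, matching the head-to-tail flow described above.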
Based on this, an embodiment of the present application provides a method for determining the number of concurrent threads in a thread pool, applied to a financial server that includes N task queues and M thread pools. Please refer to Fig. 2, a schematic flowchart of the method for determining the number of concurrent threads in a thread pool provided in the embodiment of the present application. As shown in Fig. 2, the method includes the following steps:
101: receiving a target task request, and storing the target task request in a target task queue among the N task queues according to the target loan business process that the target task request asks to be processed.
The financial server is communicatively connected to a financial institution terminal of a financial institution; it receives task requests initiated by the terminal to process a loan business process and, after a task request is completed, returns the execution result to the terminal.
The task queue receives and stores the task requests initiated by the financial institution terminal to the server. When the thread pool corresponding to one of these task requests has finished executing the previous task request, that task request is sent to the corresponding thread pool for execution, while the task requests that have not yet been executed remain stored. The queue thus provides a buffering mechanism for the financial server and prevents it from crashing when a large number of task requests arrive within a short time.
In a specific implementation, the task requests in the task queue may also carry a priority order for execution. The priority order may be determined by enqueue time: the earlier a task request is enqueued, the closer it is to the head of the queue and the higher its priority. Furthermore, the priority of a task request can be adjusted by changing its position in the queue, which in turn adjusts its execution order.
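The enqueue-time priority rule above, together with the ability to promote a request by moving it toward the head, can be illustrated with a small heap-based sketch. The request names and sort keys are hypothetical, chosen only to show the mechanism:

```python
import heapq

# Earlier enqueue time means a smaller key and hence higher priority;
# a request is promoted by giving it a smaller key, i.e. moving it
# toward the head of the queue.
pending = []                      # min-heap of (priority_key, request)
for ts, req in enumerate(["audit", "query", "collect"]):
    heapq.heappush(pending, (ts, req))   # enqueue order sets the key

# Promote "collect" ahead of the others by lowering its key.
pending = [(k, r) if r != "collect" else (-1, r) for k, r in pending]
heapq.heapify(pending)

order = [heapq.heappop(pending)[1] for _ in range(3)]
print(order)                      # "collect" now executes first
```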
In a specific implementation, a corresponding relationship exists between a target loan service process used for requesting processing of the target task request and the target task queue.
The thread pool is a form of multi-thread processing: task requests are added to a task queue, and execution starts automatically once the threads have been created. In a specific implementation, the thread pool allows the number of threads to be controlled flexibly according to the needs of the system and the hardware environment, and allows all threads to be managed and controlled uniformly, which improves the operating efficiency of the system and reduces its operating pressure.
The loan business process may include a data source acquisition process, a data cleaning process, a data source processing process, a data query process, and so on.
102: determining the target number of concurrent threads in the target thread pool according to the target thread pool, among the M thread pools, corresponding to the target loan business process and the target queue information of the target task queue.
In a specific implementation, the target thread pool in the M thread pools corresponding to the target loan service process may be one or more thread pools in the M thread pools.
In a specific implementation, the target queue information of the target task queue may include information such as the number of task requests and the type of task requests included in the target task queue.
The target number of concurrent threads is the number of threads in the target thread pool that are simultaneously in an executing state for one or more task requests.
In a specific implementation, to avoid multi-thread resource conflicts between task requests, the target concurrent threads may be dedicated to executing only the target loan business process that the target task request asks to be processed. That is, when the target task request enters the target thread pool, the pool creates the target number of concurrent threads to execute it; after the target task request has been executed, those threads are destroyed so that they cannot conflict over resources with the new threads created for the next task request entering the pool.
For example, please refer to Fig. 3, an exemplary schematic diagram of the thread pool concurrent thread number determining method provided in an embodiment of the present application. As shown in Fig. 3, the financial server includes N task queues (task queue 1, task queue 2, …, task queue N) and M thread pools (thread pool 1, thread pool 2, …, thread pool M). The financial server receives target task request 1, whose target loan business process is a data source acquisition process for data source 1, so the financial server stores target task request 1 in task queue 1 among the N task queues. The data source acquisition process for data source 1 corresponds to thread pool 1 among the M thread pools, so the financial server determines the target number of concurrent threads X in thread pool 1 according to the target queue information of task queue 1. Having determined thread pool 1 and its concurrent thread number X, the financial server sends target task request 1 from task queue 1 to thread pool 1 and invokes X concurrent threads in thread pool 1 to execute target task request 1 concurrently and complete the acquisition of data source 1.
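The Fig. 3 flow can be sketched in a few lines. Everything here is illustrative: the patent does not specify how X is computed from the queue information, so `concurrent_threads` is an assumed stand-in policy, the routing tables are hypothetical, and `ThreadPoolExecutor` stands in for thread pool 1.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical routing: the business process named in the request selects
# a task queue and a thread pool.
QUEUE_FOR_PROCESS = {"collect:source1": "queue1"}
POOL_FOR_PROCESS = {"collect:source1": "pool1"}

def concurrent_threads(queue_len, base=2, cap=8):
    # Assumed stand-in policy: X scales with the queue backlog, bounded.
    return min(base + queue_len, cap)

process = "collect:source1"
backlog = 3                           # target queue currently holds 3 requests
x = concurrent_threads(backlog)       # X derived from queue information
with ThreadPoolExecutor(max_workers=x) as pool:   # "pool1" with X workers
    # Split the acquisition of source 1 across the X concurrent threads.
    chunks = list(pool.map(lambda part: f"source1:{part}", range(x)))
print(QUEUE_FOR_PROCESS[process], POOL_FOR_PROCESS[process], x, chunks)
```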
It can be seen that, in the embodiment of the application, a target task request is received and stored in a target task queue among the N task queues according to the target loan business process that the request asks to be processed, and the target number of concurrent threads in the target thread pool is determined from the thread pool, among the M thread pools, corresponding to that process and from the target queue information of the target task queue. With the method of the embodiment of the application, the target thread pool is selected by the loan business process being requested and the concurrent thread count is set from the queue information, so different loan business processes are isolated from one another, the number of concurrent threads for each process can be controlled by the queue information of its task queue, and the loan business process is executed efficiently and without redundancy.
In some application scenarios, loan business processes are of many kinds. If different loan business processes were sent to the same thread pool indiscriminately, the system could become disordered or produce errors. To isolate the acquisition of different data sources and the processing of different loan processing links, the loan business processes may therefore be divided into data source acquisition processes and loan data processing processes, with each data source mapped to its own thread pool and each loan data processing link mapped to its own thread pool. This avoids interference or conflict between different loan business processes and preserves the system stability of the financial server. Accordingly, an embodiment of the present application provides another method for determining the number of concurrent threads in a thread pool, which includes:
in one possible example, the target loan service process includes a data source collecting process and a loan data processing process, and the above-mentioned target thread pool in the M thread pools corresponding to the target loan service process includes:
if the target loan business process is a data source acquisition process, determining the target thread pool among the M thread pools according to the target data source corresponding to the data source acquisition process;
and if the target loan business process is a loan data processing process, determining the target thread pool among the M thread pools according to the target processing link corresponding to the loan data processing process.
In specific implementation, the target data source may include data sources in aspects of tax, industry and commerce, judicial law, invoice, intellectual property, power consumption, social security, and the like.
In a specific implementation, the target thread pool among the M thread pools is determined according to the target data source corresponding to the data source acquisition process; each data source may correspond to one or more of the M thread pools, so that each such thread pool is dedicated to executing the acquisition process of that data source.
In a specific implementation, the target processing link corresponding to the loan data processing process may include: a data source cleaning process, which follows the data source acquisition process and cleans the acquired data source to obtain a data source that meets the required data quality; a data source processing process, which follows the cleaning process and performs index processing on the quality-conforming data source to obtain index data; and a data query process, which queries the quality-conforming data source and/or the index data after the cleaning process and/or the processing process.
In a specific implementation, one processing link corresponds to one or more thread pools in the M thread pools, so that each thread pool is specially used for executing the loan data processing flow corresponding to the processing link.
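A possible shape for this source-to-pool and link-to-pool routing is a pair of lookup tables. The pool names and keys below are hypothetical, used only to show the isolation scheme:

```python
# Hypothetical routing tables for the isolation scheme above: each data
# source and each processing link owns its own pool, so different loan
# business processes never share an execution environment.
SOURCE_POOLS = {"tax": "pool1", "invoice": "pool2", "patent": "pool3"}
LINK_POOLS = {"clean": "pool11", "index": "pool12", "query": "pool13"}

def target_pool(process_type, key):
    if process_type == "collect":
        return SOURCE_POOLS[key]       # data source acquisition process
    return LINK_POOLS[key]             # loan data processing process

print(target_pool("collect", "tax"), target_pool("process", "query"))
```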
For example, suppose the financial server has 20 thread pools. Please refer to Fig. 4, an exemplary schematic diagram of another method for determining the number of concurrent threads in a thread pool provided in this embodiment of the present application. As shown in Fig. 4, thread pools 1-10 handle data source acquisition processes and thread pools 11-20 handle loan data processing processes: thread pool 1 handles the acquisition process of data source 1, thread pool 2 the acquisition process of data source 2, …, and thread pool 10 the acquisition process of data source 10; thread pool 11 handles the loan data processing process of link 1, thread pool 12 that of link 2, …, and thread pool 20 that of link 10. Therefore, if a received target task request asks for the acquisition process of data source 1, the financial server stores it in the corresponding target task queue, determines the target number of concurrent threads in thread pool 1 according to the target queue information of that queue, and then sends the request to thread pool 1 and invokes that number of concurrent threads to complete the acquisition of data source 1.
It can be seen that, in this embodiment of the application, when the target loan business process is a data source acquisition process, the target thread pool among the M thread pools is determined according to the target data source of that acquisition process, and when it is a loan data processing process, the target thread pool is determined according to the target processing link of that process. Allocating different data source acquisition processes and different processing links of the loan data processing process to their own thread pools isolates each data source and each processing link, so that different task requests do not affect one another's execution environment, which improves the stability of the system when the thread pools process task requests.
In some application scenarios, different data sources have different acquisition demands at different time nodes. If the key time node of a data source can be determined from the data characteristics and acquisition characteristics of that source, and the number of concurrent threads in the thread pool handling its acquisition is increased at that key time node, then the task queue can be kept from being blocked by an excess of acquisition task requests at that particular time. Therefore, an embodiment of the present application provides another method for determining the number of concurrent threads in a thread pool, which includes:
in one possible example, when the target loan transaction process is a data source collection process, the method further includes:
acquiring a current time node, and determining whether the current time node belongs to a key time node of a target data source;
and if the current time node belongs to the key time node of the target data source, increasing the number of concurrent threads in the target thread pool.
In a specific implementation, the key time node of the target data source may be a time node at which the target data source fluctuates strongly, or at which it is collected in large volume. For example, suppose the target data source is patent data: most enterprises set an annual patent layout target at the beginning of the year, so most enterprises file a larger number of patent applications at the end of the year in order to meet that target. Based on this experience, the key time node for patent data may be December or another end-of-year point, and the key time nodes of other data sources may be determined in the same way.
If the current time node belongs to the key time node of the target data source, the number of concurrent threads in the target thread pool is increased. In a specific implementation, the number may be increased by a preset increment when the current time node belongs to the key time node, or the increment may be determined from the time difference between the current time node and the key time node.
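One way to realize the time-difference-based increment, under assumed values for the key time node, the window length, and a linear gain rule (none of which the patent fixes), is:

```python
from datetime import date

# Assumed example: a year-end key time node (e.g. for patent data) and a
# 30-day window before it; both values are illustrative.
KEY_NODE = date(2022, 12, 31)
WINDOW_DAYS = 30

def extra_threads(today, key_node=KEY_NODE, window=WINDOW_DAYS, max_extra=6):
    days_left = (key_node - today).days
    if days_left < 0 or days_left > window:
        return 0                      # outside the window: no increase
    # Linear rule: the smaller the time difference, the larger the increment.
    return round(max_extra * (window - days_left) / window)

print(extra_threads(date(2022, 12, 30)))   # one day before the key node
print(extra_threads(date(2022, 6, 1)))     # far from the key node
```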
It can be seen that, in this embodiment of the application, when the target loan business process is a data source acquisition process and the current time node belongs to a key time node of the target data source, the number of concurrent threads in the target thread pool is increased. At the key time points when the target data source fluctuates strongly or is collected in large volume, more concurrent threads are therefore available for acquisition, so that the larger number of task requests does not block the task queue, the target loan business process proceeds smoothly, and the execution stability of the financial server is improved.
In one possible example, the method further includes:
determining a target data source group according to the data source group to which the target data source belongs, wherein the data source groups are divided according to loan risk dimensions and each data source group has a mapping relation with a key time node;
the determining whether the current time node belongs to the key time node of the target data source includes:
and determining whether the current time node belongs to a key time node corresponding to the target data source group.
In some application scenarios, for a data source that has a key time node, the closer the current time is to that node, the larger the number of task requests for the data source acquisition process; that is, the number of such task requests is usually negatively correlated with the time difference from the key time node. On this basis, to preserve the execution capacity of the thread pool while saving thread resources, the number of concurrent threads in the pool may be given different gains according to the size of the time difference between the current time node and the key time node. Therefore, an embodiment of the present application provides another method for determining the number of concurrent threads in a thread pool, which includes:
in one possible example, the method further includes:
if the current time node is located within a preset time range of a key time node corresponding to the target data source group, acquiring a target time difference between the current time node and the key time node;
determining a thread number gain coefficient according to the target time difference and a preset time range;
the determining the number of the corresponding target concurrent threads in the target thread pool according to the target queue information of the target task queue includes:
and determining the corresponding target concurrent thread number in the target thread pool according to the target queue information of the target task queue and the thread number gain coefficient.
Illustratively, the data source groups comprise tax data source groups, industrial and commercial data source groups, judicial data source groups and intellectual property data source groups according to the loan risk dimension. The tax data source group comprises data sources such as value-added tax data, business tax data, urban construction tax data, house property tax data and the like; the industrial and commercial data source group comprises data sources such as operation address data, establishment date data, production and operation range data, staff number data and the like of individual households or enterprises; a judicial data source group, which comprises data sources such as litigation case data and blacklist data; the intellectual property data source group comprises data sources such as patent data, trademark data, copyright data and the like.
In a specific implementation, the mapping relationship between the data source group and the key time node may be determined according to the data characteristics of the data source group. For example, most enterprises set an annual target for their intellectual property portfolio, and therefore file a larger number of intellectual property applications at the end of the year in order to meet that target. Based on this experience, the key time node corresponding to the intellectual property data source group may be December or another year-end time point. The key time nodes of the other data source groups may be determined in the same manner, so as to finally establish the mapping relationship between data source groups and key time nodes.
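As an illustrative sketch (not part of the claimed method), the group membership and key-time-node mapping described above can be held in simple lookup tables; all group names, data source identifiers, and month values below are assumptions drawn from the examples in this application:

```python
# Illustrative mapping from data source group to its key time nodes
# (month numbers). The intellectual property entry follows the year-end
# filing example above; the tax months are hypothetical.
KEY_TIME_NODES = {
    "tax": [1, 4, 7, 10],
    "intellectual_property": [12],
}

def data_source_group(data_source):
    """Map a concrete data source to the data source group it belongs to
    (illustrative membership table)."""
    groups = {
        "patent_data": "intellectual_property",
        "trademark_data": "intellectual_property",
        "value_added_tax_data": "tax",
    }
    return groups[data_source]

def is_key_time_node(data_source, current_month):
    """Whether the current time node falls on a key time node of the
    group that the target data source belongs to."""
    group = data_source_group(data_source)
    return current_month in KEY_TIME_NODES.get(group, [])
```

With these tables, the check in the method reduces to a single dictionary lookup per incoming task request.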
In a specific implementation, the preset time range may be the 15 days before the key time node, the 1 month before the key time node, or another time range.
In a specific implementation, as the current time node approaches the key time node, the number of data source acquisition processes corresponding to the target data source inevitably increases, so the target thread pool for acquiring the target data source should have a larger number of concurrent threads. To ensure that the financial server retains stable execution capacity during busy task periods, the smaller the target time difference is relative to the preset time range, the larger the determined thread number gain coefficient; conversely, the larger the target time difference is relative to the preset time range, the smaller the determined thread number gain coefficient.
For example, with the preset time range being the 15 days before the key time node, suppose target time difference 1 is 3 days before the key time node and target time difference 2 is 7 days before the key time node. Because target time difference 1 is smaller than target time difference 2, the thread number gain coefficient 1 corresponding to target time difference 1 is larger than the thread number gain coefficient 2 corresponding to target time difference 2, so the target thread pool 1 corresponding to target time difference 1 has a larger number of concurrent threads after the gain is applied.
In a specific implementation, the thread number gain coefficient may be expressed in a numerical form in an interval of 0 to 1, or in a percentage form in an interval of 0 to 100%.
In a specific implementation, assuming that an initial concurrent thread number for the target thread pool is determined according to the target queue information of the target task queue, then if the thread number gain coefficient is expressed as a number in the interval 0-1, the number of concurrent threads corresponding to the target thread pool may be: target number of concurrent threads = (1 + thread number gain coefficient) × initial number of concurrent threads; if the thread number gain coefficient is expressed as a percentage in the interval 0-100%, it may be: target number of concurrent threads = (100% + thread number gain coefficient) × initial number of concurrent threads.
For example, assume that the current time node is November 15, the preset time range is the 1 month before the key time node, 100 initial concurrent threads are determined for the target thread pool according to the target queue information of the target task queue, and the thread number gain coefficient is expressed as a number in the interval 0-1. The target loan service process is a data source acquisition process for acquiring patent data, that is, the target data source is patent data. The data source group to which patent data belongs is the intellectual property data source group, so the target data source group is the intellectual property data source group, and its corresponding key time node is December. The target time difference between the current time node and the key time node is therefore 15 days, from which a thread number gain coefficient of 0.5 is determined according to the target time difference and the preset time range. Finally, the target concurrent thread number in the target thread pool is determined according to the target queue information of the target task queue and the thread number gain coefficient: target concurrent thread number = (1 + thread number gain coefficient) × initial concurrent thread number = (1 + 0.5) × 100 = 150. The target concurrent thread number is thus gained, so the target data source acquisition process has more ample execution capacity.
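The worked numbers above (a 15-day difference within a roughly 30-day range giving a coefficient of 0.5) are consistent with a simple linear scheme, which can be sketched as follows; the linear form itself is an assumption, one of several gain schemes the method would admit:

```python
def thread_count_gain(target_diff_days, preset_range_days):
    """Gain coefficient in [0, 1]: the smaller the target time difference
    relative to the preset time range, the larger the gain (linear sketch;
    the linear shape is an assumption)."""
    if not 0 <= target_diff_days <= preset_range_days:
        return 0.0  # current time node is outside the preset range: no gain
    return 1.0 - target_diff_days / preset_range_days

def target_concurrent_threads(initial_threads, gain_coefficient):
    """Target concurrent thread number = (1 + gain coefficient) * initial."""
    return int((1 + gain_coefficient) * initial_threads)
```

Applied to the example: a 15-day difference inside a 30-day range yields a coefficient of 0.5, and 100 initial threads become 150 after the gain.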
It can be seen that, in the embodiment of the present application, when a key time node is approached at which a large number of concurrent threads is needed to acquire a target data source in the target data source group, the target concurrent thread number in the target thread pool is determined jointly from the target queue information of the target task queue and the target time difference between the current time node and the key time node. Thus more concurrent threads can be used to acquire the target data sources in the target data source group, the task queue is prevented from being blocked by the increased task requests, smooth execution of the target loan service process is further ensured, the execution stability of the financial server is improved, and the process of determining the target concurrent thread number in the target thread pool becomes more intelligent and flexible.
In some application scenarios, if the task queue contains a large number of instances of a certain type of loan business process, and also a large number of associated business processes that precede that loan business process in the execution step sequence, this indicates that a large number of task requests for that type of loan business process are about to be executed, so the corresponding thread pool should be given more concurrent threads in advance. Therefore, an embodiment of the present application provides another method for determining the number of concurrent threads in a thread pool, which includes:
in one possible example, the determining, according to the target queue information of the target task queue, the number of corresponding target concurrent threads in the target thread pool includes:
determining a first quantity value according to the quantity of the business processes, identical to the target loan business process, included in the target task queue;
determining a second quantity value according to the quantity of the business processes, associated with the target loan business process, included in the target task queue;
and determining the corresponding target concurrent thread number in the target thread pool according to the first quantity value and the second quantity value.
In one possible example, the method further includes:
acquiring first receiving time corresponding to a service flow which is the same as a target loan service flow and is included in a target task queue, and determining first receiving frequency according to the first receiving time;
acquiring second receiving time corresponding to a business process associated with the target loan business process and included in the target task queue, and determining second receiving frequency according to the second receiving time;
determining a first weight and a second weight according to the first receiving frequency and the second receiving frequency, wherein the magnitude relation of the first weight and the second weight and the magnitude relation of the first receiving frequency and the second receiving frequency are in positive correlation;
the determining the corresponding target concurrent thread number in the target thread pool according to the first quantity value and the second quantity value includes:
and determining the corresponding target concurrent thread number in the target thread pool according to the first quantity value, the first weight, the second quantity value and the second weight.
In a specific implementation, the business process associated with the target loan business process may be determined from the execution step sequence corresponding to the target loan business process. For example, in the loan business flow, the data source cleaning process is usually performed on a data source after that data source has been obtained through the data source acquisition process; therefore, if the target loan business process is the data source cleaning process, the associated business process may be the data source acquisition process.
In a specific implementation, the first quantity value may be in a positive correlation with the quantity of the business processes, which are included in the target task queue and are the same as the target loan business processes; similarly, the second quantity value may, in a specific implementation, be in positive correlation with the quantity of the business process associated with the target loan business process included in the target task queue.
In a specific implementation, a plurality of first receiving times corresponding to the business processes identical to the target loan business process may be obtained, and the first receiving frequency may be determined from these first receiving times. For example, the first receiving frequency may be calculated from the first receiving time of the earliest such business process still held in the target task queue, the first receiving time of the most recently received such business process, and the number of business processes identical to the target loan business process. Similarly, the second receiving frequency may be determined in the same manner, and details are not repeated here.
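One way to realize the described frequency computation is to divide the number of arrivals by the span between the earliest and latest receiving times; this concrete formula is an assumption consistent with the quantities the paragraph names:

```python
def receiving_frequency(receive_times):
    """Estimate a receiving frequency (arrivals per unit time) from the
    earliest receiving time, the latest receiving time, and the number of
    buffered business processes. `receive_times` is a list of timestamps
    (e.g. in seconds); the exact formula is an illustrative assumption."""
    if len(receive_times) < 2:
        return 0.0  # not enough samples to estimate a rate
    span = max(receive_times) - min(receive_times)
    if span == 0:
        return float("inf")  # all requests arrived at the same instant
    # (count - 1) inter-arrival intervals over the observed span
    return (len(receive_times) - 1) / span
```

The same helper serves for both the first and the second receiving frequency, applied to the respective sets of receiving times.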
In a specific implementation, the higher the receiving frequency of a certain business process, the faster its task requests are flooding into the financial server, that is, the more concurrent threads that business process is about to occupy. The determination of the concurrent thread number should therefore lean toward the business process with the higher receiving frequency, so that the financial server's execution of task requests remains stable. For this reason, the magnitude relationship between the first weight and the second weight is in positive correlation with the magnitude relationship between the first receiving frequency and the second receiving frequency.
In a specific implementation, the magnitude relationship between the first weight and the second weight corresponds to the magnitude relationship between the first receiving frequency and the second receiving frequency. For example, if the first receiving frequency is greater than the second receiving frequency, the first weight is greater than the second weight; conversely, if the first receiving frequency is smaller than the second receiving frequency, the first weight is smaller than the second weight.
In a specific implementation, the target concurrent thread number in the target thread pool may be determined from the first quantity value, the first weight, the second quantity value and the second weight as: target concurrent thread number = first quantity value × first weight + second quantity value × second weight, or by other combinations of these quantities.
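A minimal sketch of this weighted combination, with an illustrative proportional weighting scheme that satisfies the positive-correlation constraint on the weights (the proportional form is an assumption, not the only admissible choice):

```python
def frequency_weights(first_freq, second_freq):
    """Derive weights whose order matches the order of the receiving
    frequencies (illustrative proportional scheme)."""
    total = first_freq + second_freq
    if total == 0:
        return 0.5, 0.5  # no frequency information: weight equally
    return first_freq / total, second_freq / total

def weighted_target_threads(first_value, first_weight,
                            second_value, second_weight):
    """Target concurrent thread number =
    first quantity value * first weight + second quantity value * second weight."""
    return int(first_value * first_weight + second_value * second_weight)
```

For example, with frequencies 3.0 and 1.0 the weights become 0.75 and 0.25, so the more rapidly arriving flow dominates the resulting thread count.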
It can be seen that, in the embodiment of the present application, the target concurrent thread number in the target thread pool is finally determined from both the quantities and the receiving frequencies of the business processes in the target task queue that are identical to, and associated with, the target loan business process. The target concurrent thread number therefore leans toward the business processes that are more numerous and more frequently received in the target task queue, making the determination of the target concurrent thread number from the target queue information more intelligent and flexible.
In one possible example, the method further includes:
determining the memory occupancy rate corresponding to the financial server;
when the memory occupancy rate is greater than a first preset occupancy rate, determining an idle thread pool, wherein the idle thread pool is a thread pool with a concurrent thread occupancy rate less than a second preset occupancy rate;
and acquiring a first target thread pool with the minimum number of concurrent threads, and assigning the target loan service process of the first target thread pool to the idle thread pool.
The memory occupancy rate refers to the memory overhead of the financial server when processing all task requests entering the thread pool, and when the memory occupancy rate is too high, the overall operation performance of the whole financial server when processing each task request can be affected.
The concurrent thread occupancy rate refers to the proportion of the concurrent thread number in the concurrent execution state in the thread pool in the total thread number of the thread pool.
The first preset occupancy rate may be 70%, 80% or other occupancy rates in specific implementations; the second preset occupancy may be 30%, 40% or other occupancy in specific implementations.
In specific implementation, the idle thread pool is determined as the thread pool for executing the target loan service process of the first target thread pool, so that the thread utilization rate of the idle thread pool is improved.
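The reassignment described above can be sketched as follows; the pool representation, the default thresholds, and the choice to pick the first target pool among the non-idle pools are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ThreadPool:
    name: str
    concurrent_threads: int  # threads currently in the concurrent execution state
    total_threads: int

    @property
    def occupancy(self) -> float:
        # concurrent thread occupancy: share of the pool's threads executing
        return self.concurrent_threads / self.total_threads

def reassign_when_memory_high(pools, memory_occupancy,
                              first_preset=0.8, second_preset=0.3):
    """When memory occupancy exceeds the first preset occupancy, pick the
    first target thread pool (minimum concurrent threads among the non-idle
    pools -- an assumption) and an idle pool (occupancy below the second
    preset occupancy) to take over its loan service process.
    Returns (first_target_pool, idle_pool) or None."""
    if memory_occupancy <= first_preset:
        return None
    idle = [p for p in pools if p.occupancy < second_preset]
    busy = [p for p in pools if p.occupancy >= second_preset]
    if not idle or not busy:
        return None
    first_target = min(busy, key=lambda p: p.concurrent_threads)
    idlest = min(idle, key=lambda p: p.occupancy)
    return first_target, idlest
```

The idle pool's spare threads then execute the flow of the first target pool, raising the idle pool's thread utilization as described.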
It can be seen that, in the embodiment of the present application, when the memory occupancy rate of the financial server is greater than the first preset occupancy rate, an idle thread pool whose concurrent thread occupancy rate is less than the second preset occupancy rate is determined, the target loan service process of the first target thread pool with the minimum concurrent thread number is assigned to the idle thread pool, and the idle threads in the idle thread pool are used to execute that process. The thread utilization rate of the idle thread pool is thereby improved, and the financial server's execution efficiency for each loan service process is improved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a device for determining the number of concurrent threads in a thread pool according to an embodiment of the present application, as shown in fig. 5:
a thread pool concurrent thread number determining device is applied to a financial server, the financial server comprises N task queues and M thread pools, the device comprises:
the receiving unit 201 is configured to receive a target task request, and determine to store the target task request in a target task queue of the N task queues according to the target loan service process that the target task request requests to process;
the determining unit 202 is configured to determine the corresponding target concurrent thread number in the target thread pool according to the target thread pool, in the M thread pools, corresponding to the target loan service process and the target queue information of the target task queue.
It can be seen that, in the apparatus provided in the embodiment of the present application, the receiving unit receives the target task request, and determines to store the target task request to a target task queue of the N task queues according to a target loan service process for requesting processing of the target task request; the determining unit determines a corresponding target concurrent thread number in the target thread pool according to the target thread pool in the M thread pools corresponding to the target loan business process and the target queue information of the target task queue. By adopting the device of the embodiment of the application, the target thread pool is determined through the target loan service process requested to be processed by the target task request, and the corresponding target concurrent thread number in the target thread pool is determined through the target queue information of the target task queue, so that the service isolation among different loan service processes is realized, the concurrent thread numbers of different loan service processes can be controlled according to the queue information of the task queue, and the loan service process can be executed efficiently without redundancy.
Specifically, in the embodiment of the present application, the functional units of the thread pool concurrent thread number determining apparatus may be divided according to the above method examples; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiment of the present application is schematic and is only a logical function division; in actual implementation, other division manners may be used.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is merely a logical division, and other divisions may be used in practice. For example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units described above, if implemented in the form of software program modules and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Consistent with the embodiment shown in fig. 2, referring to fig. 6, fig. 6 is a schematic diagram of a server structure of a hardware operating environment of an electronic device provided in an embodiment of the present application. As shown in fig. 6, the electronic device includes a processor, a memory, and computer-executable instructions stored in the memory and executable on the processor; when the computer-executable instructions are executed, the electronic device performs the steps of any of the above methods for determining the number of concurrent threads in a thread pool.
The processor is a CPU (Central Processing Unit).
The memory may be a high-speed RAM memory, or may be a non-volatile memory, such as a disk memory.
Those skilled in the art will appreciate that the configuration of the server shown in fig. 6 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 6, the memory may include computer-executable instructions for an operating system, a network communication module, and a thread pool concurrent thread number determination method. The operating system is used for managing and controlling the hardware and software resources of the server and supporting the running of the computer-executable instructions. The network communication module is used for realizing communication between components in the memory and with other hardware and software in the server; the communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile Communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), etc.
In the server shown in fig. 6, the processor is configured to execute the computer-executable instructions, stored in the memory, of the method for determining the number of concurrent threads in a thread pool, and to implement the following steps: receiving a target task request, and determining to store the target task request to a target task queue of N task queues according to the target loan business process that the target task request requests to process; and determining the corresponding target concurrent thread number in the target thread pool according to the target thread pool in the M thread pools corresponding to the target loan business process and the target queue information of the target task queue.
For specific implementation of the server related to the present application, reference may be made to each embodiment of the method for determining the number of concurrent threads in the thread pool, which is not described herein again.
An embodiment of the present application provides a computer-readable storage medium, in which computer instructions are stored; when the computer instructions are run on a communication apparatus, the communication apparatus is caused to perform the following steps: receiving a target task request, and determining to store the target task request to a target task queue of N task queues according to the target loan service process that the target task request requests to process; and determining the corresponding target concurrent thread number in the target thread pool according to the target thread pool in the M thread pools corresponding to the target loan service process and the target queue information of the target task queue. The computer here includes an electronic device.
The electronic device includes terminal equipment such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
The computer-readable storage medium may be an internal storage unit of the electronic device described in the above embodiments, for example, a hard disk or a memory of the electronic device. The computer-readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used to store the computer-executable instructions as well as other instructions and data needed by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
For specific implementation of the computer-readable storage medium related to the present application, reference may be made to each embodiment of the method for determining the number of concurrent threads in the thread pool, which is not described herein again.
Embodiments of the present application provide a computer program product, wherein the computer program product comprises a computer program operable to cause a computer to perform some or all of the steps of any of the thread pool concurrent thread count determination methods as described in the above method embodiments, and the computer program product may be a software installation package.
It should be noted that, for simplicity of description, any embodiment of the method for determining the number of concurrent threads in a thread pool is described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and that not all of the described actions are necessarily required by the present application.
The above embodiments of the present application are introduced in detail, and the principle and the implementation of the method for determining the number of concurrent threads in a thread pool according to the present application are explained in detail herein by applying specific examples, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the method for determining the number of concurrent threads in the thread pool according to the present application, the specific implementation and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present application.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
It will be understood by those skilled in the art that all or part of the steps in the various methods of any of the above embodiments of the thread pool concurrent thread number determination method may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, and the memory may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
It is apparent that those skilled in the art can make various changes and modifications to a thread pool concurrent thread number determination method provided herein without departing from the spirit and scope of the present application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for determining the number of concurrent threads in a thread pool is applied to a financial server, wherein the financial server comprises N task queues and M thread pools, and the method comprises the following steps:
receiving a target task request, and determining to store the target task request to a target task queue of the N task queues according to the target loan service process that the target task request requests to process;
and determining the corresponding target concurrent thread number in the target thread pool according to the target thread pool in the M thread pools corresponding to the target loan service process and the target queue information of the target task queue.
2. The method according to claim 1, wherein the target loan service process comprises a data source acquisition process and a loan data processing process, and the determining a target thread pool of the M thread pools corresponding to the target loan service process comprises:
if the target loan service process is the data source acquisition process, determining a target thread pool in the M thread pools according to a target data source corresponding to the data source acquisition process;
and if the target loan service process is the loan data processing process, determining a target thread pool in the M thread pools according to a target processing link corresponding to the loan data processing process.
3. The method of claim 2, wherein when the target loan service process is the data source acquisition process, the method further comprises:
acquiring a current time node, and determining whether the current time node belongs to a key time node of the target data source;
and if the current time node belongs to the key time node of the target data source, increasing the number of concurrent threads in the target thread pool.
4. The method according to claim 3, further comprising:
determining a target data source group according to the data source group to which the target data source belongs, wherein data source groups are divided according to loan risk dimensions and each data source group has a mapping relationship with a key time node;
wherein determining whether the current time node belongs to a key time node of the target data source comprises:
determining whether the current time node belongs to the key time node corresponding to the target data source group.
5. The method according to claim 4, further comprising:
if the current time node falls within a preset time range around the key time node corresponding to the target data source group, acquiring a target time difference between the current time node and the key time node;
determining a thread number gain coefficient according to the target time difference and the preset time range;
wherein determining the number of target concurrent threads in the target thread pool according to the target queue information of the target task queue comprises:
determining the number of target concurrent threads in the target thread pool according to the target queue information of the target task queue and the thread number gain coefficient.
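One way to realize the gain coefficient of claim 5 is a linear ramp that peaks at the key time node and decays to 1.0 at the edge of the preset window. The linear shape, the 2.0 ceiling, and the clamping bounds below are assumptions; the claim only requires that the coefficient be derived from the time difference and the preset range:

```python
def thread_gain(time_diff_s, window_s, max_gain=2.0):
    # Inside the preset window around a key time node, the gain rises
    # linearly as the current time approaches the node; outside the
    # window there is no gain.
    if time_diff_s >= window_s:
        return 1.0
    closeness = 1.0 - time_diff_s / window_s  # 1.0 at the node, 0.0 at the edge
    return 1.0 + (max_gain - 1.0) * closeness

def concurrent_threads(base, gain, cap):
    # Scale the queue-derived base thread count by the gain coefficient,
    # clamped to a sane [1, cap] range.
    return min(cap, max(1, round(base * gain)))
```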
6. The method according to claim 1, wherein the target queue information comprises the business processes in the target task queue that are the same as the target loan business process and the business processes in the target task queue that are associated with the target loan business process, and determining the number of target concurrent threads in the target thread pool according to the target queue information of the target task queue comprises:
determining a first quantity value according to the number of business processes in the target task queue that are the same as the target loan business process;
determining a second quantity value according to the number of business processes in the target task queue that are associated with the target loan business process; and
determining the number of target concurrent threads in the target thread pool according to the first quantity value and the second quantity value.
7. The method according to claim 6, further comprising:
acquiring first receiving times corresponding to the business processes in the target task queue that are the same as the target loan business process, and determining a first receiving frequency according to the first receiving times;
acquiring second receiving times corresponding to the business processes in the target task queue that are associated with the target loan business process, and determining a second receiving frequency according to the second receiving times;
determining a first weight and a second weight according to the first receiving frequency and the second receiving frequency, wherein the magnitude relationship between the first weight and the second weight is positively correlated with the magnitude relationship between the first receiving frequency and the second receiving frequency;
wherein determining the number of target concurrent threads in the target thread pool according to the first quantity value and the second quantity value comprises:
determining the number of target concurrent threads in the target thread pool according to the first quantity value, the first weight, the second quantity value, and the second weight.
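The weighted computation of claims 6 and 7 can be sketched as follows. Normalizing the two receiving frequencies is one simple way to get weights whose ordering matches the frequency ordering, as claim 7 requires; the normalization, the sliding-window frequency estimate, and the clamping bounds are all illustrative assumptions:

```python
def receive_frequency(timestamps, now, horizon_s):
    # Requests received per second over a recent horizon (sliding window).
    recent = [t for t in timestamps if now - t <= horizon_s]
    return len(recent) / horizon_s

def weights_from_frequencies(f1, f2):
    # Normalizing keeps the weight ordering identical to the frequency
    # ordering (the positive correlation required by claim 7).
    total = f1 + f2
    if total == 0:
        return 0.5, 0.5
    return f1 / total, f2 / total

def target_thread_count(n_same, w1, n_assoc, w2, lo=1, hi=64):
    # Weighted sum of the same-process and associated-process counts,
    # clamped to a sane [lo, hi] range.
    raw = n_same * w1 + n_assoc * w2
    return min(hi, max(lo, round(raw)))
```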
8. The method according to any one of claims 1-7, further comprising:
determining a memory occupancy rate of the financial server;
when the memory occupancy rate is greater than a first preset occupancy rate, determining an idle thread pool, wherein the idle thread pool is a thread pool whose concurrent thread occupancy rate is less than a second preset occupancy rate; and
acquiring a first target thread pool having the smallest number of concurrent threads, and remapping the target loan business process of the first target thread pool to the idle thread pool.
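The rebalancing step of claim 8 can be sketched as below. The data shapes (a pool table of active/maximum thread counts, a process-to-pool map) and the threshold defaults are assumptions made for illustration:

```python
def rebalance(pools, pool_of_process, mem_occupancy,
              mem_limit=0.8, idle_limit=0.3):
    # pools: pool name -> (active_threads, max_threads)
    # Only rebalance when memory pressure exceeds the first threshold.
    if mem_occupancy <= mem_limit:
        return pool_of_process
    # An idle pool has a concurrent thread occupancy rate below the
    # second threshold.
    idle = [name for name, (active, cap) in pools.items()
            if cap and active / cap < idle_limit]
    if not idle:
        return pool_of_process
    # First target pool: the one with the fewest concurrent threads.
    smallest = min(pools, key=lambda n: pools[n][0])
    idle_pool = idle[0]
    # Remap the business processes of the smallest pool onto the idle pool.
    remapped = dict(pool_of_process)
    for proc, pool in remapped.items():
        if pool == smallest:
            remapped[proc] = idle_pool
    return remapped
```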
9. An apparatus for determining the number of concurrent threads in a thread pool, applied to a financial server, wherein the financial server comprises N task queues and M thread pools, and the apparatus comprises:
a receiving unit, configured to receive a target task request and store the target task request in a target task queue of the N task queues according to a target loan business process that the target task request requests to be processed; and
a determining unit, configured to determine the number of target concurrent threads in a target thread pool according to the target thread pool, among the M thread pools, corresponding to the target loan business process and according to target queue information of the target task queue.
10. An electronic device, comprising a processor, a memory, and computer-executable instructions stored on the memory and executable on the processor, wherein the instructions, when executed by the processor, cause the electronic device to perform the method of any one of claims 1-8.
CN202210890624.XA 2022-07-27 2022-07-27 Thread pool concurrent thread number determining method and related product Pending CN115168012A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210890624.XA CN115168012A (en) 2022-07-27 2022-07-27 Thread pool concurrent thread number determining method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210890624.XA CN115168012A (en) 2022-07-27 2022-07-27 Thread pool concurrent thread number determining method and related product

Publications (1)

Publication Number Publication Date
CN115168012A true CN115168012A (en) 2022-10-11

Family

ID=83497150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210890624.XA Pending CN115168012A (en) 2022-07-27 2022-07-27 Thread pool concurrent thread number determining method and related product

Country Status (1)

Country Link
CN (1) CN115168012A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745254A (en) * 2023-12-06 2024-03-22 镁佳(北京)科技有限公司 Course generation method, course generation device, computer equipment and readable storage medium

Similar Documents

Publication Publication Date Title
WO2020211579A1 (en) Processing method, device and system for distributed bulk processing system
WO2018072687A1 (en) Resource scheduling method and apparatus, and filtered scheduler
CN110532205B (en) Data transmission method, data transmission device, computer equipment and computer readable storage medium
CN111078436B (en) Data processing method, device, equipment and storage medium
CN111061570B (en) Image calculation request processing method and device and terminal equipment
CN113918101B (en) Method, system, equipment and storage medium for writing data cache
CN110138688A (en) Dynamic adjusts method, apparatus, equipment and the readable storage medium storing program for executing of business interface
WO2018166145A1 (en) Method and device for batch offering of repayment data
US20240143392A1 (en) Task scheduling method, chip, and electronic device
CN110602004A (en) Supervision data reporting, electronic device, equipment and computer readable storage medium
CN114625533A (en) Distributed task scheduling method and device, electronic equipment and storage medium
CN112559476A (en) Log storage method for improving performance of target system and related equipment thereof
CN115168012A (en) Thread pool concurrent thread number determining method and related product
CN115129621A (en) Memory management method, device, medium and memory management module
CN113254222B (en) Task allocation method and system for solid state disk, electronic device and storage medium
CN110221914B (en) File processing method and device
CN112148467A (en) Dynamic allocation of computing resources
CN104679575A (en) Control system and control method for input and output flow
CN109062857A (en) A kind of new type of messages controller and its communication means that can be communicated between realization of High Speed multiprocessor
US6754658B1 (en) Database server processing system, method, program and program storage device
US9910893B2 (en) Failover and resume when using ordered sequences in a multi-instance database environment
CN112486638A (en) Method, apparatus, device and storage medium for executing processing task
US20230393782A1 (en) Io request pipeline processing device, method and system, and storage medium
CN210804421U (en) Server system
CN112311695B (en) On-chip bandwidth dynamic allocation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination