CN111176806B - Service processing method and device and computer readable storage medium - Google Patents


Info

Publication number
CN111176806B
CN111176806B (application CN201911233963.5A)
Authority
CN
China
Prior art keywords
thread, thread group, groups, group, specified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911233963.5A
Other languages
Chinese (zh)
Other versions
CN111176806A (en)
Inventor
王培林
陈煜�
周继恩
尹祥龙
袁野
邓昶
王忠昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN201911233963.5A priority Critical patent/CN111176806B/en
Publication of CN111176806A publication Critical patent/CN111176806A/en
Application granted granted Critical
Publication of CN111176806B publication Critical patent/CN111176806B/en


Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06F: Electric Digital Data Processing
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02D: Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a service processing method, apparatus, system, and computer-readable storage medium. The method includes: acquiring a service to be processed, and decomposing it into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the plurality of sub-services, and configuring a plurality of queues among the thread groups according to the preset processing flow; and processing the corresponding sub-service with each of the thread groups while transferring data between the thread groups using the queues. With this method, a service to be processed can be decomposed into a plurality of sub-services, each of which runs its own independent multithreading, thereby improving the processing efficiency of complex services.

Description

Service processing method and device and computer readable storage medium
Technical Field
The invention belongs to the field of data processing, and particularly relates to a service processing method, a service processing device and a computer readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
As business scenarios grow more complex, a complex service may nest multiple steps with widely different processing speeds. To maintain overall processing performance for such services, multithreaded processing is often employed. However, because the number of threads required by each step of a complex service can vary greatly, it is difficult to determine an appropriate thread count, and it is difficult to run each step of one complex service with its own independent multithreading. As a result, the processing efficiency of complex services is low.
Disclosure of Invention
To solve these problems in the prior art, a service processing method, a service processing apparatus, and a computer-readable storage medium are provided.
The present invention provides the following.
In a first aspect, a service processing method is provided, including: acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the plurality of sub-services, and configuring a plurality of queues among the plurality of thread groups according to the preset processing flow; and processing the corresponding sub-service with each of the plurality of thread groups while transferring data between the plurality of thread groups using the plurality of queues.
In some possible embodiments, configuring a plurality of queues among the plurality of thread groups according to the preset processing flow includes: identifying, according to the preset processing flow, adjacent thread groups having a direct dependency relationship among the thread groups; configuring a unique input queue for the subsequent thread group of the adjacent thread groups; and configuring an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group passes its processed data to the subsequent thread group.
In some possible implementations, processing, by each of the plurality of thread groups, a corresponding sub-service further includes: determining a first thread group in the plurality of thread groups, wherein the first thread group is used for executing a sub-service with the highest priority in the plurality of sub-services; external data is acquired by the first thread group, and corresponding sub-services are processed through each of the plurality of thread groups.
In some possible embodiments, the method further comprises: the number of threads in each thread group is dynamically adjusted according to the thread busy state of each thread group as the corresponding sub-service is processed by each thread group of the plurality of thread groups.
In some possible implementations, dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group further includes: dynamically adjusting the number of threads in the first thread group according to the capacity of the external data source that provides the external data.
In some possible implementations, dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group further includes: detecting, in each monitoring period, the amount of data to be processed in the input queue of a specified thread group among the plurality of thread groups, as a first value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group based on the first value of the specified thread group.
In some possible implementations, dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group further includes: detecting, in each monitoring period, the number of thread waits of the specified thread group, as a second value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group based on the second value of the specified thread group.
In some possible embodiments, the method further comprises: if the first value of the specified thread group exceeds a preset threshold, incrementing the current thread count of the specified thread group until the first value is detected to have fallen to no more than the preset threshold; and/or, if the second value of the specified thread group is nonzero, decrementing the current thread count of the specified thread group until the second value is detected to have fallen to 0; wherein the preset threshold is determined by the current thread count of the specified thread group.
In some possible embodiments, the method further comprises: stopping the dynamic adjustment of the thread count in each thread group in response to a preset event, where the preset event is a preset action and/or the adjustment amplitude of each thread group's thread count reaching a preset degree of convergence.
In some possible embodiments, the method further comprises: when the service to be processed ends, generating an end mark with the first thread group and passing it in sequence, through the queues, to the other thread groups among the plurality of thread groups; each of the other thread groups terminates its own execution upon reading the end mark.
In a second aspect, a service processing apparatus is provided, including: a decomposition unit configured to acquire a service to be processed and decompose it into a plurality of sub-services according to a preset processing flow; a creation unit configured to create a plurality of thread groups corresponding to the plurality of sub-services and to configure a plurality of queues among the plurality of thread groups according to the preset processing flow; and a processing unit configured to process the corresponding sub-service with each of the plurality of thread groups and to transfer data between the plurality of thread groups using the plurality of queues.
In some possible implementations, the creating unit is further configured to: identify, according to the preset processing flow, adjacent thread groups having a direct dependency relationship among the thread groups; configure a unique input queue for the subsequent thread group of the adjacent thread groups; and configure an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group passes its processed data to the subsequent thread group.
In some possible embodiments, the processing unit is further configured to: determining a first thread group in the plurality of thread groups, wherein the first thread group is used for executing a sub-service with the highest priority in the plurality of sub-services; external data is acquired by the first thread group, and corresponding sub-services are processed through each of the plurality of thread groups.
In some possible embodiments, the device further comprises a dynamic adjustment unit for: the number of threads in each thread group is dynamically adjusted according to the thread busy state of each thread group as the corresponding sub-service is processed by each thread group of the plurality of thread groups.
In some possible embodiments, the dynamic adjustment unit is further configured to:
the number of threads in the first thread group is dynamically adjusted by the external data source capabilities providing external data.
In some possible embodiments, the dynamic adjustment unit is further configured to: detect, in each monitoring period, the amount of data to be processed in the input queue of a specified thread group among the plurality of thread groups, as a first value of the specified thread group; and dynamically adjust the number of threads in the specified thread group based on the first value of the specified thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: detect, in each monitoring period, the number of thread waits of the specified thread group, as a second value of the specified thread group; and dynamically adjust the number of threads in the specified thread group based on the second value of the specified thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: if the first value of the specified thread group exceeds a preset threshold, increment the current thread count of the specified thread group until the first value is detected to have fallen to no more than the preset threshold; and/or, if the second value of the specified thread group is nonzero, decrement the current thread count of the specified thread group until the second value is detected to have fallen to 0; wherein the preset threshold is determined by the current thread count of the specified thread group.
In some possible embodiments, the processing unit is further configured to: stop the dynamic adjustment of the thread count in each thread group in response to a preset event, where the preset event is a preset action and/or the adjustment amplitude of each thread group's thread count reaching a preset degree of convergence.
In some possible embodiments, the processing unit is further configured to: when the service to be processed ends, generate an end mark with the first thread group and pass it in sequence, through the queues, to the other thread groups among the plurality of thread groups; each of the other thread groups terminates its own execution upon reading the end mark.
In a third aspect, a service processing apparatus is provided, including: one or more multi-core processors; and a memory for storing one or more programs; the one or more programs, when executed by the one or more multi-core processors, cause the one or more multi-core processors to: acquire a service to be processed and decompose it into a plurality of sub-services according to a preset processing flow; create a plurality of thread groups corresponding to the plurality of sub-services and configure a plurality of queues among the plurality of thread groups according to the preset processing flow; and process the corresponding sub-service with each of the plurality of thread groups while transferring data between the plurality of thread groups using the plurality of queues.
In a fourth aspect, there is provided a computer-readable storage medium storing a program which, when executed by a multi-core processor, causes the multi-core processor to perform the method of the first aspect.
At least one of the technical solutions adopted in the embodiments of the present application can achieve the following beneficial effects: the service to be processed is decomposed into a plurality of sub-services, and each thread group processes its corresponding sub-service, so that each sub-service runs its own independent multithreading, which improves the processing efficiency of the complex service; moreover, the thread groups are connected, and data is transferred between them, through queues, so that the plurality of sub-tasks retain their original processing logic.
It should be understood that the foregoing description is only an overview of the technical solutions of the present invention, so that the technical means of the present invention may be more clearly understood and implemented in accordance with the content of the specification. The following specific embodiments of the present invention are described in order to make the above and other objects, features and advantages of the present invention more comprehensible.
Drawings
The advantages and benefits described herein, as well as other advantages and benefits, will become apparent to those of ordinary skill in the art upon reading the following detailed description of the exemplary embodiments. The drawings are only for purposes of illustrating exemplary embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 is a flow chart of a service processing method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of a sub-service flow according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a thread group flow for processing the sub-service flow of Fig. 2, according to an embodiment of the present invention;
Fig. 4 is a core class diagram for implementing the thread group flow of Fig. 3, according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a service processing apparatus according to another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the present invention, it should be understood that terms such as "comprises" or "comprising," etc., are intended to indicate the presence of features, numbers, steps, acts, components, portions, or combinations thereof disclosed in the specification, and are not intended to exclude the possibility of the presence of one or more other features, numbers, steps, acts, components, portions, or combinations thereof.
In addition, it should be noted that, without conflict, the embodiments of the present invention and the features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
When a complex service to be processed is handled, it can be decomposed into a plurality of sub-services according to a preset processing flow; a plurality of thread groups corresponding to the sub-services are created, and a plurality of queues are configured among the thread groups according to the preset processing flow; each of the thread groups processes its corresponding sub-service, and data is transferred between the thread groups using the queues. Each sub-service can thus run independent multithreading in its own thread group, and the ratio of thread counts can be adjusted to the processing-speed differences among the sub-services, further improving the processing efficiency of the complex service; the thread groups are connected, and data is transferred between them, through queues, so that the plurality of sub-tasks retain their original processing logic.
Having described the basic principles of the present invention, various non-limiting embodiments of the invention are described in detail below.
Fig. 1 schematically shows a flow diagram of a business processing method 100 according to an embodiment of the invention.
As shown in fig. 1, the method 100 may include:
s101, acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow;
specifically, the preset processing flow refers to any flow that can complete the service to be processed, and can be determined according to the specific situation of the service to be processed, which is not limited in the disclosure. Alternatively, the preset process flow may have complex asynchronous processing logic. Based on this, in this embodiment, a policy of decomposing the service to be processed according to a preset processing flow may be adopted. Specifically, the service to be processed may be divided into a plurality of ordered sub-services according to a preset processing flow, so that the plurality of sub-services are sequentially executed to complete the whole service to be processed. The specific operation content of each sub-service may depend on the actual service scenario, for example, may be an operation of reading data from a database, or may be a specific data processing operation, which is not limited in this disclosure. There may be a direct data dependency between adjacently processed sub-services. For example, the service to be processed may be decomposed into a sub-service flow diagram as shown in fig. 2, and it can be seen that the sub-service flow includes sub-services 1 to 5 of the preset process flow.
As shown in fig. 1, the method 100 may further include:
s102, creating a plurality of thread groups corresponding to a plurality of sub-services, and configuring a plurality of queues among the plurality of thread groups according to a preset processing flow;
specifically, each thread group represents a set of threads, and the thread number of each thread group can be determined according to the difficulty level of the corresponding subtask. The queue is a first-in first-out queue. Each thread group passes data processed by itself to the next thread group through a queue.
For example, fig. 3 shows a schematic diagram of a thread group flow for processing the sub-service flow of fig. 2. The system comprises a plurality of thread groups and a plurality of queues, with the thread groups in one-to-one correspondence with the subtasks: subtask 1 is processed by thread group 1, whose output data is passed to thread group 2 via queue 1; subtask 2 is processed by thread group 2, whose output data is passed to thread group 3 and thread group 4 via queue 2 and queue 3, respectively; subtask 3 is processed by thread group 3 and subtask 4 by thread group 4, and the output data of thread groups 3 and 4 is passed to thread group 5 via queue 4; finally, subtask 5 is executed by thread group 5.
Further, to implement the above thread groups and queues in a concrete scenario, fig. 4 shows a core class diagram for implementing the thread group flow of fig. 3 in this embodiment. The MultiThreadStep class implements each thread group of fig. 3: it is responsible for constructing, maintaining, and destroying the thread group, and provides the threads of the group with input and output data queues; each step has exactly one input queue (inputQueue in fig. 4) and one or more output queues (outputQueue in fig. 4). For example, thread group 2 in fig. 3 is followed by two branches, so it has two output queues, namely queue 2 and queue 3. The TaskQueue class implements the queues of fig. 3. The Process class assembles the constructed thread groups and queues according to the preset processing flow, connecting the thread groups into a flow as shown in fig. 3; once assembly is complete, Process can start or stop the running of the flow.
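A minimal sketch of these three roles, written in Python rather than the Java-style notation of fig. 4 (MultiThreadStep and Process are names from the text; every method, parameter, and the use of queue.Queue in place of TaskQueue are assumptions, not the patent's implementation):

```python
import queue
import threading

class MultiThreadStep:
    """Illustrative stand-in for MultiThreadStep: owns one thread group,
    exactly one input queue, and one or more output queues."""
    def __init__(self, handler, num_threads=1):
        self.handler = handler            # per-item processing function
        self.input_queue = queue.Queue()  # exactly one input queue
        self.output_queues = []           # one or more output queues
        self.threads = [threading.Thread(target=self._run)
                        for _ in range(num_threads)]

    def _run(self):
        while True:
            item = self.input_queue.get()
            if item is None:              # end mark: stop this thread
                break
            result = self.handler(item)
            for out in self.output_queues:
                out.put(result)           # fan out to every successor

    def start(self):
        for t in self.threads:
            t.start()

    def stop(self):
        for _ in self.threads:
            self.input_queue.put(None)    # one end mark per thread
        for t in self.threads:
            t.join()

class Process:
    """Illustrative stand-in for Process: wires steps together according
    to the preset flow, then starts or stops the whole flow."""
    def __init__(self, steps):
        self.steps = steps                # steps in flow order

    def connect(self, pred, succ):
        # the predecessor's output queue IS the successor's input queue
        pred.output_queues.append(succ.input_queue)

    def start(self):
        for s in self.steps:
            s.start()

    def stop(self):
        for s in self.steps:              # stop upstream-first so every
            s.stop()                      # in-flight item drains through

# Assemble a two-step flow: double each item, then add one.
sink = queue.Queue()
s1 = MultiThreadStep(lambda x: x * 2)
s2 = MultiThreadStep(lambda x: x + 1)
flow = Process([s1, s2])
flow.connect(s1, s2)
s2.output_queues.append(sink)
flow.start()
for x in [1, 2, 3]:
    s1.input_queue.put(x)
flow.stop()
final = sorted(sink.get() for _ in range(3))
print(final)  # [3, 5, 7]
```

Stopping the steps in flow order is deliberate: joining the upstream group first guarantees its output is fully enqueued before the downstream group sees its own end mark.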
In some possible embodiments, configuring a plurality of queues among the plurality of thread groups according to the preset processing flow includes: identifying, according to the preset processing flow, adjacent thread groups having a direct dependency relationship among the thread groups; configuring a unique input queue for the subsequent thread group of the adjacent thread groups; and configuring an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group passes its processed data to the subsequent thread group. Here, a direct dependency relationship means that the input of the subsequent thread group of an adjacent pair depends directly on the output of the preceding thread group; "preceding" and "subsequent" are relative notions. For example, taking thread groups 1-5 in fig. 3: thread group 1 and thread group 2 form adjacent thread groups, with thread group 1 as the preceding group and thread group 2 as the subsequent group; thread group 2, thread group 3, and thread group 4 form adjacent thread groups, with thread group 2 as the preceding group and thread groups 3 and 4 as the subsequent groups; and thread group 3, thread group 4, and thread group 5 form adjacent thread groups, with thread groups 3 and 4 as the preceding groups and thread group 5 as the subsequent group. In this way, the plurality of thread groups process their corresponding subtasks in order.
As shown in fig. 1, the method 100 may further include:
S103, processing the corresponding sub-service with each of the plurality of thread groups, and transferring data between the plurality of thread groups using the plurality of queues.
In some possible implementations, processing, by each of the plurality of thread groups, a corresponding sub-service further includes: determining a first thread group in the plurality of thread groups, wherein the first thread group is used for executing a sub-service with the highest priority in the plurality of sub-services; external data is acquired by the first thread group, and corresponding sub-services are processed through each of the plurality of thread groups.
For example, as shown in fig. 3, take an e-commerce information synchronization service as the service to be processed. The service can be divided among 5 thread groups connected by 4 queues, with each step implemented as follows: thread group 1 calls the e-commerce interface, acquires the commodity list, assembles the basic commodity information, and puts it into queue 1; thread group 2 takes the basic commodity information from queue 1, acquires the auxiliary commodity information based on it, and puts the basic and auxiliary information into queues 2 and 3, respectively; thread group 3 takes the basic commodity information from queue 2, stores it into the corresponding table, and puts it into queue 4 (optionally, to improve efficiency and reduce pressure on the database, thread group 3 may write in batches after accumulating 1000 records of basic commodity information); thread group 4 takes the auxiliary commodity information from queue 3, stores it into the corresponding table, and puts it into queue 4 (optionally, thread group 4 may likewise write in batches after accumulating 1000 records of auxiliary commodity information); thread group 5 takes the table of basic commodity information and the table of auxiliary commodity information from queue 4 and establishes the association between the commodity information and its auxiliary information according to the commodity's stock keeping unit (SKU).
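The batch-write strategy used by thread groups 3 and 4 can be sketched as follows (a hypothetical illustration: the BatchWriter class and flush_fn parameter are my own, and the demo uses a batch size of 3 where the example above uses 1000):

```python
class BatchWriter:
    """Accumulate records and flush them to the database in batches,
    reducing per-record write pressure on the database."""
    def __init__(self, batch_size, flush_fn):
        self.batch_size = batch_size
        self.flush_fn = flush_fn   # e.g. a bulk INSERT in real use
        self.buffer = []

    def add(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(list(self.buffer))  # one bulk write
            self.buffer.clear()

batches = []  # stand-in for the database: records each bulk write
w = BatchWriter(batch_size=3, flush_fn=batches.append)
for rec in range(7):
    w.add(rec)
w.flush()  # flush the final partial batch at end of stream

print([len(b) for b in batches])  # [3, 3, 1]
```

The explicit final flush matters: without it, a trailing partial batch (here the single record 6) would never reach the database.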
In some possible embodiments, the method further comprises: the number of threads in each thread group is dynamically adjusted according to the thread busy state of each thread group as the corresponding sub-service is processed by each thread group of the plurality of thread groups. It should be noted that too many threads in each thread group may cause scheduling overhead, and too many threads may cause processing blocking, thereby affecting cache locality and overall performance. Thus, the present disclosure may dynamically adjust the number of threads independently for each thread group based on the thread busy state of each thread group.
In some possible implementations, dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group further includes: the number of threads in the first thread group is dynamically adjusted by the external data source capabilities providing external data.
In some possible implementations, dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group further includes: detecting, in each monitoring period, the amount of data to be processed in the input queue of a specified thread group among the plurality of thread groups, as a first value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group based on the first value. The specified thread group may be any thread group other than the first thread group. For example, as shown in fig. 3, the specified thread group may be thread group 2, whose input queue is queue 1; the amount of data to be processed in queue 1 is then detected as the first value of thread group 2. It will be appreciated that when the first value of the specified thread group greatly exceeds its thread count, the specified thread group is busy, so its thread count can be effectively and dynamically adjusted according to the first value.
In some possible implementations, dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group further includes: detecting, in each monitoring period, the number of thread waits of the specified thread group, as a second value of the specified thread group; and dynamically adjusting the number of threads in the specified thread group based on the second value. The specified thread group may be any thread group other than the first thread group. For example, as shown in fig. 3, the specified thread group may be thread group 2, whose input queue is queue 1; the number of times the threads of thread group 2 fail to obtain data from queue 1 within a period is then detected as the second value of thread group 2. It will be appreciated that, ideally, the number of thread waits should be 0, so when the second value of the specified thread group is greater than 0, the specified thread group is idle, and its thread count can be effectively and dynamically adjusted according to the second value.
In some possible embodiments, the method further comprises: if the first value of the specified thread group exceeds a preset threshold, incrementing the current thread count of the specified thread group until the first value is detected to have fallen to no more than the preset threshold; and/or, if the second value of the specified thread group is a non-0 value, decrementing the current thread count of the specified thread group until the second value is detected to have fallen to a 0 value; wherein the preset threshold is determined by the current thread count of the specified thread group. For example, the preset threshold may be 2 times the current thread count of the specified thread group. For example, assume that the specified thread groups are thread groups 2-5 in FIG. 3, and that the current thread count of each specified thread group is N_i, i = 2 to 5. It can be understood that the optimal state of the whole flow is: (1) the first value of each specified thread group does not exceed 2N_i; and (2) the second value of every specified thread group is 0. Thus, when the second value of any specified thread group is detected to be non-0, the thread count of that specified thread group is reduced by one until its second value is 0; when the first value of a specified thread group exceeds 2N_i, the thread count of that specified thread group is increased by one until its first value no longer exceeds 2N_i. In this way, the optimal thread count of each thread group can be calculated according to the actual running conditions and adjusted dynamically, realizing a dynamic balance of thread counts among the thread groups.
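One adjustment step per monitoring period, following the rules above, might be sketched as follows (the function name, the choice of 2× the thread count as the preset threshold, and the floor of one thread are illustrative assumptions):

```python
def adjust_thread_count(n_threads, first_value, second_value):
    """Apply one monitoring-period adjustment to a specified thread group.

    first_value:  backlog of the group's input queue
    second_value: number of thread waits observed in the period
    The preset threshold is taken as 2 * current thread count.
    """
    if first_value > 2 * n_threads:
        return n_threads + 1          # busy: grow the group by one
    if second_value > 0:
        return max(1, n_threads - 1)  # idle waits seen: shrink by one
    return n_threads                  # optimal state: no change

print(adjust_thread_count(2, 5, 0))  # backlog 5 > 4, grow: prints 3
print(adjust_thread_count(4, 3, 2))  # waits observed, shrink: prints 3
print(adjust_thread_count(3, 6, 0))  # 6 does not exceed 6: prints 3
```

Iterating this step each period converges toward the optimal state described in the text: no group's backlog exceeds 2N_i and no group observes thread waits.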
In some possible embodiments, the method further comprises: in response to a preset event, stopping the dynamic adjustment of the thread count in each thread group; the preset event is a preset action and/or the thread-count adjustment amplitude of each thread group reaching a preset degree of convergence. It will be appreciated that, in order to reduce the performance impact of dynamically adjusting the thread count of each thread group, a trial-run period and a dynamic adjustment switch may be provided: the switch is turned on during the trial run, the optimal thread count of every thread group is obtained over that period, and the switch is turned off once the trial run ends.
In some possible embodiments, the method further comprises: when the service to be processed is finished, generating an end mark by the first thread group and passing it, through the queues, to the other thread groups of the plurality of thread groups in sequence; each of the other thread groups ends its own running when it reads the end mark. It will be appreciated that the overall process described above operates like a production pipeline: each thread group corresponds to one production team, each queue corresponds to a warehouse, and each thread group passes processed data through a queue to the next thread group. On this basis, the first thread group is the first producer on the line; when the service is finished, it generates an end mark that is passed in sequence to the other thread groups through the queues, each of which ends its own running upon reading the mark, and when the end mark reaches the last thread group, the whole pipeline stops running.
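The end-mark propagation can be sketched with a sentinel object passed down the queue chain (a common shutdown pattern; the single-thread "group", the `END` sentinel, and the doubling stand-in work are illustrative assumptions):

```python
import queue
import threading

END = object()  # the end mark generated by the first thread group

def worker(in_q, out_q):
    """A one-thread 'thread group': process items until the end mark
    arrives, then forward the mark and end its own running."""
    while True:
        item = in_q.get()
        if item is END:
            out_q.put(END)   # pass the end mark to the next group
            break
        out_q.put(item * 2)  # stand-in for the real sub-service work

q1, q2 = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(q1, q2))
t.start()
for x in (1, 2, 3):          # data from the preceding group
    q1.put(x)
q1.put(END)                  # service finished: emit the end mark
t.join()

results = []
while True:                  # drain the last queue up to the end mark
    item = q2.get()
    if item is END:
        break
    results.append(item)
print(results)  # → [2, 4, 6]
```

Because each queue is FIFO, the end mark is guaranteed to arrive only after all real data, so no items are lost during shutdown.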
Thus, according to the method provided by the aspects of the embodiments of the invention, each sub-service can be run independently with multiple threads, and the ratio of thread counts can be adjusted based on the differences in processing speed among the sub-services, further improving the processing efficiency of complex services. The thread groups are connected through queues, which carry the data transfer, so that the several sub-tasks can keep their original processing logic. In addition, the optimal thread count of each sub-task can be calculated from the actual running conditions and adjusted dynamically.
Based on the same technical concept, the embodiment of the present invention further provides a service processing device, which is configured to execute the service processing method provided in any one of the foregoing embodiments. Fig. 5 is a schematic structural diagram of a service processing device according to an embodiment of the present invention.
As shown in fig. 5, the apparatus 500 includes:
a decomposition unit 501, configured to obtain a service to be processed, and decompose the service to be processed into a plurality of sub-services according to a preset processing flow;
a creating unit 502, configured to create a plurality of thread groups corresponding to a plurality of sub-services, and configure a plurality of queues between the plurality of thread groups according to a preset processing flow;
the processing unit 503 is configured to process the corresponding sub-service through each of the plurality of thread groups, and transfer data between the plurality of thread groups using the plurality of queues.
In some possible implementations, the creating unit 502 is further configured to: confirm adjacent thread groups with a direct dependency relationship among the plurality of thread groups according to the preset processing flow; configure a unique input queue for the subsequent thread group of the adjacent thread groups; and configure an output queue for the preceding thread group based on the input queue of the subsequent thread group, so that the preceding thread group transfers the processed data to the subsequent thread group.
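The queue wiring performed by the creating unit can be sketched as follows (the group names, the `flow` list standing in for the preset processing flow, and the dictionary layout are illustrative assumptions):

```python
import queue

# Hypothetical preset processing flow: each pair is (preceding group,
# subsequent group) with a direct dependency, e.g. a chain as in fig. 3.
flow = [("group1", "group2"), ("group2", "group3")]

input_queues = {}   # one unique input queue per subsequent thread group
output_queues = {}  # each preceding group's output queue(s)

for pred, succ in flow:
    input_queues.setdefault(succ, queue.Queue())
    # the preceding group's output queue is configured from the
    # subsequent group's input queue, so processed data flows onward
    output_queues.setdefault(pred, []).append(input_queues[succ])

# group1's output queue is exactly group2's input queue
print(output_queues["group1"][0] is input_queues["group2"])  # → True
```

The same loop also covers a preceding group with several subsequent groups: its entry in `output_queues` simply accumulates one queue per successor.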
In some possible implementations, the processing unit 503 is further configured to: determining a first thread group in the plurality of thread groups, wherein the first thread group is used for executing a sub-service with the highest priority in the plurality of sub-services; external data is acquired by the first thread group, and corresponding sub-services are processed through each of the plurality of thread groups.
In some possible embodiments, the device further comprises a dynamic adjustment unit for: the number of threads in each thread group is dynamically adjusted according to the thread busy state of each thread group as the corresponding sub-service is processed by each thread group of the plurality of thread groups.
In some possible embodiments, the dynamic adjustment unit is further configured to:
the number of threads in the first thread group is dynamically adjusted by the external data source capabilities providing external data.
In some possible embodiments, the dynamic adjustment unit is further configured to: detecting the quantity of data to be processed in an input queue of a specified thread group in a plurality of thread groups in each monitoring period, and taking the quantity as a first value of the specified thread group; the number of threads in the specified thread group is dynamically adjusted based on the first value of the specified thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: detecting the thread waiting times of the specified thread group in each monitoring period to be used as a second value of the specified thread group; dynamically adjusting the number of threads in the specified thread group based on the second value of the specified thread group.
In some possible embodiments, the dynamic adjustment unit is further configured to: if the first value of the appointed thread group exceeds the preset threshold value, the current thread number of the appointed thread group is increased until the first value is detected to be reduced to be not exceeding the preset threshold value; and/or, if the second value of the appointed thread group is a non-0 value, the current thread number of the appointed thread group is decremented until the second value is detected to be reduced to the 0 value; wherein the preset threshold is determined by the current thread count of the specified thread group.
In some possible implementations, the processing unit 503 is further configured to: responding to a preset event, stopping dynamically adjusting the thread number in each thread group; the preset event is a preset action and/or the thread number adjustment range of each thread group reaches a preset convergence degree.
In some possible implementations, the processing unit 503 is further configured to: when the service to be processed is finished, generate an end mark by the first thread group and pass it, through the queues, to the other thread groups of the plurality of thread groups in sequence; each of the other thread groups ends its own running when it reads the end mark.
Thus, according to the device provided by the aspects of the embodiments of the invention, each sub-service can be run independently with multiple threads, and the ratio of thread counts can be adjusted based on the differences in processing speed among the sub-services, further improving the processing efficiency of complex services. The thread groups are connected through queues, which carry the data transfer, so that the several sub-tasks can keep their original processing logic. In addition, the optimal thread count of each sub-task can be calculated from the actual running conditions and adjusted dynamically.
It should be noted that, the service processing apparatus in the embodiment of the present application may implement each process of the foregoing embodiment of the service processing method, and achieve the same effects and functions, which are not described herein again.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a device, a method, or a computer-readable storage medium. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," a "module," or a "device."
In some possible embodiments, a task processing device of the present invention may include one or more processors and at least one memory. The memory stores a program which, when executed by the processor, causes the processor to perform the steps shown in fig. 1: acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow; creating a plurality of thread groups corresponding to the plurality of sub-services, and configuring a plurality of queues among the plurality of thread groups according to the preset processing flow; and processing the corresponding sub-service through each of the plurality of thread groups, and transferring data among the plurality of thread groups using the plurality of queues.
The task processing device 6 according to this embodiment of the present invention is described below with reference to fig. 6. The device 6 shown in fig. 6 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the invention.
As shown in fig. 6, apparatus 6 may be embodied in the form of a general purpose computing device, including, but not limited to: at least one processor 10, at least one memory 20, and a bus 60 connecting the different device components.
Bus 60 includes a data bus, an address bus, and a control bus.
Memory 20 may include volatile memory such as Random Access Memory (RAM) 21 and/or cache memory 22, and may further include Read Only Memory (ROM) 23.
Memory 20 may also include program modules 24, such program modules 24 including, but not limited to: operating devices, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The apparatus 6 may also communicate with one or more external devices 2 (e.g. a keyboard, pointing device, bluetooth device, etc.) as well as with one or more other devices. Such communication may be performed through an input/output (I/O) interface 40 and displayed on the display unit 30. Also, the device 6 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through a network adapter 50. As shown, the network adapter 50 communicates with other modules in the device 6 via a bus 60. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in connection with the apparatus 6, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID devices, tape drives, data backup storage devices, and the like.
Fig. 7 shows a computer readable storage medium for performing the method as described above.
In some possible implementations, aspects of the invention may also be embodied in the form of a computer-readable storage medium including program code for causing a processor to perform the method described above, when the program code is executed by the processor.
The above-described method includes the operations and steps shown in the figures above, as well as others not shown, which will not be described in detail here.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 7, a computer readable storage medium 70 according to an embodiment of the present invention is described, which may employ a portable compact disc read-only memory (CD-ROM), includes program code, and may be run on a terminal device such as a personal computer. However, the computer-readable storage medium of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, although the operations of the methods of the present invention are depicted in the drawings in a particular order, this is not required to either imply that the operations must be performed in that particular order or that all of the illustrated operations be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
While the spirit and principles of the present invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor does the division into aspects imply that features in those aspects cannot be used to advantage in combination; that division is merely for convenience of description. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (20)

1. A method for processing a service, comprising:
acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow;
creating a plurality of thread groups corresponding to the plurality of sub-services;
confirming adjacent thread groups with direct dependency relations in a plurality of thread groups according to the preset processing flow;
Configuring a unique input queue for a subsequent thread group in the adjacent thread groups; the plurality of thread groups includes at least one thread group corresponding to a plurality of subsequent thread groups;
if the adjacent thread group only comprises one subsequent thread group, configuring an output queue of a preceding thread group according to an input queue of the subsequent thread group, so that the preceding thread group transmits processed data to the subsequent thread group;
if the adjacent thread group comprises a plurality of subsequent thread groups, configuring an output queue of a preceding thread group according to a plurality of input queues corresponding to the plurality of subsequent thread groups, so that the preceding thread group transmits the processed data to the subsequent thread group;
and processing corresponding sub-services through each thread group in the plurality of thread groups, and transmitting data among the plurality of thread groups by utilizing the plurality of queues.
2. The method of claim 1, wherein processing the corresponding sub-traffic by each of the plurality of thread groups further comprises:
determining a first thread group of the plurality of thread groups, the first thread group for executing a sub-service having a highest priority among the plurality of sub-services;
And acquiring external data by the first thread group, and processing corresponding sub-services by each thread group in the plurality of thread groups.
3. The method as recited in claim 2, further comprising:
and dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group when the corresponding sub-business is processed by each thread group in the plurality of thread groups.
4. The method of claim 3, wherein dynamically adjusting the number of threads in each thread group based on the thread busy state of each thread group further comprises:
the number of threads in the first thread group is dynamically adjusted by the performance of an external data source providing external data.
5. The method of claim 3, wherein dynamically adjusting the number of threads in each thread group based on the thread busy state of each thread group further comprises:
detecting the quantity of data to be processed in an input queue of a specified thread group in the plurality of thread groups in each monitoring period, wherein the quantity of data to be processed is used as a first value of the specified thread group;
dynamically adjusting the number of threads in the specified thread group according to a first value of the specified thread group.
6. The method of claim 5, wherein dynamically adjusting the number of threads in each thread group based on the thread busy state of each thread group, further comprises:
detecting the thread waiting times of the specified thread group in each monitoring period to be used as a second value of the specified thread group;
dynamically adjusting the number of threads in the specified thread group according to the second value of the specified thread group.
7. The method as recited in claim 6, further comprising:
if the first value of the specified thread group exceeds a preset threshold, increasing the current thread count of the specified thread group until the first value is detected to have fallen to no more than the preset threshold; and/or,
if the second value of the specified thread group is a non-0 value, decrementing the current thread count of the specified thread group until the second value is detected to have fallen to a 0 value;
wherein the preset threshold is determined by the current thread count of the specified thread group.
8. The method as recited in claim 2, further comprising:
responding to a preset event, stopping dynamically adjusting the thread number in each thread group;
The preset event is a preset action and/or the thread number adjustment amplitude of each thread group reaches a preset convergence degree.
9. The method as recited in claim 2, further comprising:
when the service to be processed is finished, generating an end mark by the first thread group and sequentially transmitting the end mark to other thread groups in the plurality of thread groups through a queue;
the other thread groups of the plurality of thread groups end their own running when the end mark is read.
10. A service processing apparatus, comprising:
the decomposition unit is used for acquiring the service to be processed and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow;
a creation unit, configured to create a plurality of thread groups corresponding to the plurality of sub-services;
confirming adjacent thread groups with direct dependency relations in a plurality of thread groups according to the preset processing flow;
configuring a unique input queue for a subsequent thread group in the adjacent thread groups; the plurality of thread groups includes at least one thread group corresponding to a plurality of subsequent thread groups;
if the adjacent thread group only comprises one subsequent thread group, configuring an output queue of a preceding thread group according to an input queue of the subsequent thread group, so that the preceding thread group transmits processed data to the subsequent thread group;
And the processing unit is used for processing the corresponding sub-business through each thread group in the plurality of thread groups and transmitting data among the plurality of thread groups by utilizing the plurality of queues.
11. The apparatus of claim 10, wherein the processing unit is further configured to:
determining a first thread group of the plurality of thread groups, the first thread group for executing a sub-service having a highest priority among the plurality of sub-services;
and acquiring external data by the first thread group, and processing corresponding sub-services by each thread group in the plurality of thread groups.
12. The apparatus of claim 11, further comprising a dynamic adjustment unit to:
and dynamically adjusting the number of threads in each thread group according to the thread busy state of each thread group when the corresponding sub-business is processed by each thread group in the plurality of thread groups.
13. The apparatus of claim 12, wherein the dynamic adjustment unit is further configured to:
the number of threads in the first thread group is dynamically adjusted by the performance of an external data source providing external data.
14. The apparatus of claim 12, wherein the dynamic adjustment unit is further configured to:
Detecting the quantity of data to be processed in an input queue of a specified thread group in the plurality of thread groups in each monitoring period, wherein the quantity of data to be processed is used as a first value of the specified thread group;
dynamically adjusting the number of threads in the specified thread group according to a first value of the specified thread group.
15. The apparatus of claim 14, wherein the dynamic adjustment unit is further configured to:
detecting the thread waiting times of the specified thread group in each monitoring period to be used as a second value of the specified thread group;
dynamically adjusting the number of threads in the specified thread group according to the second value of the specified thread group.
16. The apparatus of claim 15, wherein the dynamic adjustment unit is further configured to:
if the first value of the specified thread group exceeds a preset threshold, increasing the current thread count of the specified thread group until the first value is detected to have fallen to no more than the preset threshold; and/or,
if the second value of the specified thread group is a non-0 value, decrementing the current thread count of the specified thread group until the second value is detected to have fallen to a 0 value;
Wherein the preset threshold is determined by the current thread count of the specified thread group.
17. The apparatus of claim 11, wherein the processing unit is further configured to:
responding to a preset event, stopping dynamically adjusting the thread number in each thread group;
the preset event is a preset action and/or the thread number adjustment amplitude of each thread group reaches a preset convergence degree.
18. The apparatus of claim 11, wherein the processing unit is further configured to:
when the service to be processed is finished, generating an end mark by the first thread group and sequentially transmitting the end mark to other thread groups in the plurality of thread groups through a queue;
the other thread groups of the plurality of thread groups end their own running when the end mark is read.
19. A service processing apparatus, comprising:
one or more multi-core processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more multi-core processors, cause the one or more multi-core processors to implement:
acquiring a service to be processed, and decomposing the service to be processed into a plurality of sub-services according to a preset processing flow;
Creating a plurality of thread groups corresponding to the plurality of sub-services;
confirming adjacent thread groups with direct dependency relations in a plurality of thread groups according to the preset processing flow;
configuring a unique input queue for a subsequent thread group in the adjacent thread groups; the plurality of thread groups includes at least one thread group corresponding to a plurality of subsequent thread groups;
if the adjacent thread group only comprises one subsequent thread group, configuring an output queue of a preceding thread group according to an input queue of the subsequent thread group, so that the preceding thread group transmits processed data to the subsequent thread group;
if the adjacent thread group comprises a plurality of subsequent thread groups, configuring an output queue of a preceding thread group according to a plurality of input queues corresponding to the plurality of subsequent thread groups, so that the preceding thread group transmits the processed data to the subsequent thread group;
and processing corresponding sub-services through each thread group in the plurality of thread groups, and transmitting data among the plurality of thread groups by utilizing the plurality of queues.
20. A computer readable storage medium storing a program which, when executed by a multi-core processor, causes the multi-core processor to perform the method of any of claims 1-9.
CN201911233963.5A 2019-12-05 2019-12-05 Service processing method and device and computer readable storage medium Active CN111176806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911233963.5A CN111176806B (en) 2019-12-05 2019-12-05 Service processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911233963.5A CN111176806B (en) 2019-12-05 2019-12-05 Service processing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111176806A CN111176806A (en) 2020-05-19
CN111176806B true CN111176806B (en) 2024-02-23

Family

ID=70624546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911233963.5A Active CN111176806B (en) 2019-12-05 2019-12-05 Service processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111176806B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148455B (en) * 2020-09-29 2021-07-27 星环信息科技(上海)股份有限公司 Task processing method, device and medium
CN113905273A (en) * 2021-09-29 2022-01-07 上海阵量智能科技有限公司 Task execution method and device
CN114595070B (en) * 2022-05-10 2022-08-12 上海登临科技有限公司 Processor, multithreading combination method and electronic equipment
CN115599558B (en) * 2022-12-13 2023-03-10 无锡学院 Task processing method and system for industrial Internet platform

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220033A (en) * 2017-07-05 2017-09-29 百度在线网络技术(北京)有限公司 Method and apparatus for controlling thread pool thread quantity
CN109582455A (en) * 2018-12-03 2019-04-05 恒生电子股份有限公司 Multithreading task processing method, device and storage medium
CN109634761A (en) * 2018-12-17 2019-04-16 深圳乐信软件技术有限公司 A kind of system mode circulation method, apparatus, computer equipment and storage medium
CN110413390A (en) * 2019-07-24 2019-11-05 深圳市盟天科技有限公司 Thread task processing method, device, server and storage medium
CN110457124A (en) * 2019-08-06 2019-11-15 中国工商银行股份有限公司 For the processing method and its device of business thread, electronic equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9183109B2 (en) * 2010-05-25 2015-11-10 Intel Corporation Method and system for analyzing the performance of multi-threaded applications

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220033A (en) * 2017-07-05 2017-09-29 百度在线网络技术(北京)有限公司 Method and apparatus for controlling thread pool thread quantity
CN109582455A (en) * 2018-12-03 2019-04-05 恒生电子股份有限公司 Multithreading task processing method, device and storage medium
CN109634761A (en) * 2018-12-17 2019-04-16 深圳乐信软件技术有限公司 A kind of system mode circulation method, apparatus, computer equipment and storage medium
CN110413390A (en) * 2019-07-24 2019-11-05 深圳市盟天科技有限公司 Thread task processing method, device, server and storage medium
CN110457124A (en) * 2019-08-06 2019-11-15 中国工商银行股份有限公司 For the processing method and its device of business thread, electronic equipment and medium

Also Published As

Publication number Publication date
CN111176806A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111176806B (en) Service processing method and device and computer readable storage medium
US8572622B2 (en) Reducing queue synchronization of multiple work items in a system with high memory latency between processing nodes
US9262220B2 (en) Scheduling workloads and making provision decisions of computer resources in a computing environment
US9792158B2 (en) Framework to improve parallel job workflow
CN102663552B (en) Dynamic workflow engine supporting online self-evolution
US9645743B2 (en) Selective I/O prioritization by system process/thread
Xu et al. Adaptive task scheduling strategy based on dynamic workload adjustment for heterogeneous Hadoop clusters
CN109992407B (en) YARN cluster GPU resource scheduling method, device and medium
US11507419B2 (en) Method,electronic device and computer program product for scheduling computer resources in a task processing environment
CN104572290A (en) Method and device for controlling message processing threads
US9817696B2 (en) Low latency scheduling on simultaneous multi-threading cores
KR20140070231A (en) Map-reduce workflow processing device and method, and storage media storing the same
US9158588B2 (en) Flexible task and thread binding with preferred processors based on thread layout
US9049164B2 (en) Dynamic message retrieval by subdividing a message queue into sub-queues
CN112579267A (en) Decentralized big data job flow scheduling method and device
CN102708006A (en) Processing optimization load adjustment
US20190138472A1 (en) Validation of correctness of interrupt triggers and delivery
US9507637B1 (en) Computer platform where tasks can optionally share per task resources
US11256543B2 (en) Processor and instruction scheduling method
US20230096015A1 (en) Method, electronic deviice, and computer program product for task scheduling
US9483322B1 (en) Heterogenous core microarchitecture
CN107562527B (en) Real-time task scheduling method for SMP (symmetric multi-processing) on RTOS (remote terminal operating system)
JP2012164050A (en) Information processing device, task management method and program
CN109800064B (en) Processor and thread processing method
CN117539595A (en) Cooperative scheduling method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant