CN114880120A - Data processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114880120A
CN114880120A (application number CN202210503729.5A)
Authority
CN
China
Prior art keywords: subtask, subtasks, processing time, time length, historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210503729.5A
Other languages
Chinese (zh)
Other versions
CN114880120B (en)
Inventor
郑维
吴海英
吴鹏
蒋宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Consumer Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Consumer Finance Co Ltd filed Critical Mashang Consumer Finance Co Ltd
Priority to CN202210503729.5A priority Critical patent/CN114880120B/en
Publication of CN114880120A publication Critical patent/CN114880120A/en
Application granted granted Critical
Publication of CN114880120B publication Critical patent/CN114880120B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F9/5038 — Allocation of resources (e.g. of the CPU) to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/505 — Allocation of resources to service a request, the resource being a machine, considering the load
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Exchange Systems With Centralized Control (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present application provides a data processing method, apparatus, device, and storage medium. The data processing method includes: in response to an access request from a request platform to a current interface, determining a plurality of subtasks corresponding to the current interface, where the calling relationships among the subtasks are mutually independent; acquiring a historical processing duration corresponding to each subtask, where the historical processing duration of each subtask is obtained from a plurality of first processing durations of that subtask, the first processing durations being processing durations recorded within a preset historical time period; determining an execution order among the plurality of subtasks according to the historical processing durations; calling the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask; and sending the execution results to the request platform. A dynamic balance between interface response efficiency and thread resource consumption can thereby be achieved.

Description

Data processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, apparatus, device, and storage medium.
Background
When a software system provides an interface to a request platform, the system can return an execution result to the request platform only after completing complex logic tasks, so the system is required to respond to the request platform's requests in a timely manner.
At present, a request received from the request platform through the provided interface is handled by invoking the interface's task processing logic either serially or in parallel to obtain the corresponding execution result, which is then returned to the request platform through the interface. However, these processing methods suffer from low processing efficiency and high thread resource consumption.
Disclosure of Invention
The embodiments of the present application provide a data processing method, apparatus, device, and storage medium, which are used to solve the problem that a current interface cannot respond to a request from a request platform in time.
In a first aspect, an embodiment of the present application provides a data processing method, including: in response to an access request from a request platform to a current interface, determining a plurality of subtasks corresponding to the current interface, where the calling relationships among the subtasks are mutually independent; acquiring a historical processing duration corresponding to each subtask, where the historical processing duration of each subtask is obtained from a plurality of first processing durations of that subtask, the first processing durations being processing durations recorded within a preset historical time period; determining an execution order among the plurality of subtasks according to the historical processing durations; calling the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask; and sending the execution results to the request platform.
In a possible implementation, before acquiring the historical processing duration corresponding to each subtask, the method further includes: processing a plurality of second processing durations of each subtask according to a preset rule to obtain a target duration, where the second processing durations are processing durations recorded within a preset time. Acquiring the historical processing duration corresponding to each subtask then includes: determining the currently acquired target duration as the historical processing duration.
In a possible embodiment, the plurality of subtasks includes a first subtask and at least two second subtasks, and determining the execution order among the subtasks according to the historical processing durations includes: determining, according to the historical processing duration corresponding to the first subtask and the historical processing durations corresponding to the second subtasks, that the first subtask and each second subtask are executed in parallel while the at least two second subtasks are executed serially among themselves, where the first subtask is the subtask with the longest historical processing duration among the plurality of subtasks, and the historical processing duration of the first subtask is greater than or equal to the sum of the historical processing durations of the at least two second subtasks.
In a possible embodiment, the plurality of subtasks further includes a third subtask, and determining the execution order among the plurality of subtasks according to the historical processing durations further includes: determining, according to the historical processing duration of the third subtask and that of the first subtask, that the third subtask and the first subtask are executed in parallel, where the historical processing duration of the third subtask is greater than that of each second subtask, and the sum of the historical processing durations of the second subtasks and the third subtask is greater than the historical processing duration of the first subtask.
In a possible implementation, before calling the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask, the method further includes: if a subtask needs a target parameter when executing its corresponding task, calling a non-current interface to acquire the target parameter.
In a possible implementation, after calling the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask, the method further includes: if the target parameter is acquired, acquiring the response duration of the non-current interface; and if the response duration is greater than a preset threshold, sending the response duration to the terminal corresponding to the non-current interface, where the response duration is used to instruct the terminal to update the non-current interface.
In a possible embodiment, the method further includes: if the target parameter is not acquired, processing the subtask according to a preset circuit-breaking mode or degradation mode.
In a possible embodiment, the method further includes: if this is the first response to the request platform's access request to the current interface, executing the plurality of subtasks in parallel.
In a second aspect, an embodiment of the present application provides a data processing apparatus, including:
a response module, configured to respond to an access request from the request platform to the current interface and determine a plurality of subtasks corresponding to the current interface, where the calling relationships among the subtasks are mutually independent;
an acquisition module, configured to acquire the historical processing duration corresponding to each subtask, where the historical processing duration of each subtask is obtained from a plurality of first processing durations of that subtask, the first processing durations being processing durations recorded within a preset historical time period;
a determining module, configured to determine the execution order among the plurality of subtasks according to the historical processing durations;
an execution module, configured to call the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask;
and a sending module, configured to send the execution results to the request platform.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the data processing method according to any one of the first aspect is implemented.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on an electronic device, the electronic device is caused to execute the data processing method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, which includes a computer program that, when run on an electronic device, causes the electronic device to execute the data processing method according to any one of the first aspect.
The embodiments of the present application provide a data processing method, apparatus, device, and storage medium. The data processing method includes: in response to an access request from a request platform to a current interface, determining a plurality of subtasks corresponding to the current interface, where the calling relationships among the subtasks are mutually independent; acquiring a historical processing duration corresponding to each subtask, where the historical processing duration of each subtask is obtained from a plurality of first processing durations of that subtask, the first processing durations being processing durations recorded within a preset historical time period; determining an execution order among the plurality of subtasks according to the historical processing durations; calling the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask; and sending the execution results to the request platform. Because the execution order is determined according to the historical processing durations each time the plurality of subtasks are processed, the processing mode of the subtasks can be updated in real time, the overall processing duration of the subtasks is reduced, the consumption of thread resources is reduced, and a dynamic balance between interface response efficiency and thread resource consumption is achieved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic diagram of an execution environment of the data processing method provided in the present application;
fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating an execution sequence of subtasks according to an embodiment of the present application;
fig. 4 is a schematic diagram of a data processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an execution sequence of subtasks according to another embodiment of the present application;
fig. 6 is a block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims.
Generally, to increase the speed at which the current interface responds to the request platform, the efficiency with which the current interface processes the requested task must be improved on one hand, and the speed of calling parameters from non-current interfaces must be improved on the other. In the prior art, request tasks are executed serially and non-current interfaces are called without any optimization, which lowers the current interface's response speed to the request platform; simply calling all subtasks in parallel, in turn, increases thread resource consumption. In the embodiments of the present application, the execution order is determined according to historical processing durations each time the plurality of subtasks are processed, so the processing mode of the subtasks can be updated in real time, reducing both the overall processing duration of the subtasks and the consumption of thread resources, and achieving a dynamic balance between interface response efficiency and thread resource consumption.
Fig. 1 shows an execution environment of the data processing method provided in the present application. As shown in Fig. 1, the application scenario may include a request platform 11, a first server 12, a second server 13, and a third server 14. The current interface is deployed on the first server 12, non-current interface a1 is deployed on the second server 13, and non-current interface a2 is deployed on the third server 14. When the request platform 11 needs to call the current interface, it sends an access request to the current interface, and the current interface processes the corresponding request task according to the access request. While processing the request task, the current interface may need to call parameters of non-current interface a1 and/or non-current interface a2; the request task is then processed using the called parameters to obtain an execution result, which is returned to the request platform 11.
In addition, the embodiment of the application can be applied to any interface calling scene, such as approval of a credit application.
It should be noted that fig. 1 may be one of application scenarios provided in the embodiment of the present application, and the embodiment of the present application does not limit a specific application scenario. The data processing method provided by the embodiment of the application can be applied to a server, and the server can be an independent server or a service cluster.
The technical solution of the present application will be described in detail below with reference to specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application. Next, a description will be given by taking a server as an execution subject.
As shown in fig. 2, the data processing method includes the steps of:
s201, responding to an access request of a request platform to a current interface, and determining a plurality of subtasks corresponding to the current interface.
In this embodiment, if the access request from the request platform is not the first access to the current interface, the corresponding plurality of subtasks can be determined directly when the current interface is accessed. If the access request is the first access to the current interface, then referring to Fig. 1, when the first server 12 receives the access request from the request platform, it can determine from the parameters carried by the access request that the interface handling the request is the current interface deployed on the first server, obtain the plurality of sub-logics of the current interface, and split the request processing task corresponding to the access request into a plurality of subtasks via those sub-logics. When subsequent access requests access the current interface, the corresponding plurality of subtasks can then be determined directly.
The calling relationships among the plurality of subtasks are mutually independent; that is, there is no strong dependency between any two subtasks, and the processing of any subtask does not depend on the execution result of another subtask.
S202, acquiring the historical processing duration corresponding to each subtask.
The historical processing duration of each subtask is obtained from a plurality of first processing durations of that subtask, where the first processing durations are processing durations recorded within a preset historical time period.
In this embodiment, if the plurality of subtasks corresponding to the current interface are not being processed for the first time, the processing duration of each subtask has been recorded and stored during historical processing, so the historical processing duration can be determined by reading the pre-stored processing durations.
Further, the historical processing duration may be the average of the processing durations within the preset historical period. For example, the currently acquired historical processing duration of the first subtask may be the average duration of processing the first subtask over the last day or the last week.
By acquiring the historical processing duration corresponding to each subtask, the processing duration of each subtask can be updated in real time, which in turn allows the execution order of the plurality of subtasks to be determined more effectively.
S203, determining the execution order among the plurality of subtasks according to the historical processing durations.
The execution order includes parallel execution and/or serial execution; for example, at least one of the plurality of subtasks may be executed in parallel with the remaining subtasks, while the remaining subtasks are executed serially among themselves.
For example, referring to Fig. 3, there are five subtasks: subtask A, subtask B, subtask C, subtask D, and subtask E, with historical processing durations of 0.3 seconds, 0.5 seconds, 0.4 seconds, 0.9 seconds, and 1.8 seconds respectively. In Fig. 3, subtasks A, B, C, and D are executed serially on thread A, while subtask E is executed on thread B, in parallel with subtasks A through D. With threads A and B running in parallel, the total processing duration of subtasks A through D is 2.1 seconds and that of subtask E is 1.8 seconds, so processing all five subtasks takes only 2.1 seconds. With the current approach of executing all subtasks serially, 3.9 seconds would be needed, far longer than the 2.1 seconds of this embodiment.
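The arithmetic in this example can be checked with a short Java sketch (an illustrative reconstruction, not code from the patent; the class and method names are invented):

```java
import java.util.Arrays;

class ScheduleExample {
    // Durations in milliseconds for subtasks A, B, C, D (serial chain on
    // thread A) and E (thread B), taken from the Fig. 3 example.
    static final long[] SERIAL_CHAIN = {300, 500, 400, 900}; // A..D
    static final long PARALLEL_TASK = 1800;                  // E

    // Makespan when A..D run serially on one thread and E runs in parallel:
    // the longer of the two threads' total durations.
    static long parallelMakespan() {
        long chain = Arrays.stream(SERIAL_CHAIN).sum(); // 2100 ms
        return Math.max(chain, PARALLEL_TASK);
    }

    // Total time if every subtask were executed serially on a single thread.
    static long fullySerial() {
        return Arrays.stream(SERIAL_CHAIN).sum() + PARALLEL_TASK; // 3900 ms
    }
}
```

This reproduces the 2.1 s vs. 3.9 s comparison in the text while using only two threads.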
In addition, the execution order in this embodiment may be arranged in various other ways: for example, subtask E may be executed in parallel with subtask D, while subtasks A, B, and C are executed serially and in parallel with subtasks D and E. By determining the execution order of the plurality of subtasks according to their historical processing durations, the total processing duration of the subtasks can be reduced, thereby improving the response speed to the access request.
S204, calling the plurality of subtasks according to the execution order to execute the corresponding tasks and obtain the execution result corresponding to each subtask.
In this embodiment, each subtask yields one execution result when its processing finishes; for example, in Fig. 3, subtasks A through E yield 5 corresponding execution results.
S205, sending the execution results to the request platform.
The execution results of the subtasks can be aggregated, and the aggregated execution results returned to the request platform.
In this embodiment, processing the plurality of subtasks according to a determined execution order reduces the chance that a task is blocked at a time-consuming step, which would lengthen the serial calling time and in turn lower the response speed of the current interface. Because the execution order is determined from historical processing durations each time the plurality of subtasks are processed, the processing mode of the subtasks can be updated in real time, the overall processing duration of the subtasks is reduced, the response speed of the current interface is increased, and a dynamic balance between interface response efficiency and thread resource consumption is achieved.
In an optional embodiment, after S201, the method further includes: if this is the first response to the request platform's access request to the current interface, determining that the execution order among the plurality of subtasks is parallel execution.
If the access request is the first access to the current interface, then referring to Fig. 1, when the first server 12 receives the access request from the request platform, it can determine from the parameters carried by the access request that the interface handling the request is the current interface deployed on the first server, obtain the request processing logic of the current interface, determine that this logic comprises a plurality of sub-logics, and decompose the request processing task corresponding to the access request into a plurality of subtasks via those sub-logics.
For example, referring to Fig. 4, the request processing task is decomposed into 5 subtasks, subtask A through subtask E. When the plurality of subtasks are processed for the first time, they are executed in parallel; that is, each subtask is processed by its own thread.
In this embodiment, recording the processing durations may be implemented with an aspect (AOP) combined with a StopWatch, where the aspect intercepts each interface call and the StopWatch measures the time it consumes.
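As a dependency-free illustration of such duration recording (the aspect + StopWatch approach named above would typically rely on Spring AOP and `org.springframework.util.StopWatch`; the sketch below substitutes a plain `System.nanoTime()` wrapper, and all names are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

class DurationRecorder {
    // Latest recorded processing duration (ms) per subtask name.
    private final Map<String, Long> lastDurationMs = new ConcurrentHashMap<>();

    // Run a subtask and record how long it took, mimicking an around-advice
    // wrapped around the subtask's execution.
    <T> T timed(String name, Supplier<T> task) {
        long start = System.nanoTime();
        try {
            return task.get();
        } finally {
            lastDurationMs.put(name, (System.nanoTime() - start) / 1_000_000);
        }
    }

    Long lastDuration(String name) {
        return lastDurationMs.get(name);
    }
}
```

The recorded values per subtask are what S202 would later aggregate into historical processing durations.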
Further, if the next stage is entered only after all subtasks have finished executing, an asynchronous concurrency container is deployed to process the subtasks. Specifically, the plurality of subtasks may call multiple parameters of non-current interfaces during processing to obtain the execution result of each subtask; once obtained, the execution results are used as variables that must be processed by the next interface in the next stage before being returned to the request platform. Since the execution result of each subtask serves as an input parameter of the next interface, it is necessary to wait until all subtasks have finished executing. For this application scenario, an asynchronous concurrency container such as CompletableFuture may be used, with the plurality of subtasks placed into the container for concurrent processing.
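A minimal sketch of this "wait for all subtasks" scenario with `CompletableFuture` (illustrative only; representing each subtask as a `Supplier<String>` and all names are assumptions, not the patent's implementation):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;
import java.util.stream.Collectors;

class AllOfExample {
    // Submit every subtask concurrently and block until all have finished,
    // then collect the execution results in submission order.
    static List<String> runAll(List<Supplier<String>> subtasks) {
        ExecutorService pool = Executors.newFixedThreadPool(subtasks.size());
        try {
            List<CompletableFuture<String>> futures = subtasks.stream()
                    .map(t -> CompletableFuture.supplyAsync(t, pool))
                    .collect(Collectors.toList());
            // allOf completes only when every subtask's future has completed,
            // matching the "next stage needs all results" requirement.
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
            return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
        } finally {
            pool.shutdown();
        }
    }
}
```

The collected list would then be passed as input parameters to the next interface.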
In an alternative embodiment, if the next stage is entered as soon as any one of the plurality of subtasks finishes processing, a multi-threaded concurrency container is deployed to process the subtasks. For example, if the access request is a credit application request, each subtask of the current interface corresponds to one risk-control approval system: if any subtask's execution result is rejection, the overall execution result is rejection, and only if every subtask's execution result is approval is the overall execution result approval. For such scenarios, a multi-threaded concurrency container such as CompletionService may be used, with the decomposed subtasks placed into the container for processing.
If a rejection result is currently taken from the blocking queue, the execution result is directly determined as rejection. If an approval result is taken from the blocking queue, the process continues to wait for the remaining subtasks to finish and then traverses their execution results; only when all execution results are approval is an overall approval returned to the request platform.
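This short-circuiting approval flow can be sketched with `ExecutorCompletionService`, whose `take()` returns results in completion order via an internal blocking queue (an illustrative reconstruction; the `Callable<Boolean>` representation and the result strings are invented):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ApprovalExample {
    // Run every approval subtask concurrently; return "REJECT" as soon as any
    // subtask rejects, and "APPROVE" only when all subtasks approve.
    static String approve(List<Callable<Boolean>> checks) {
        ExecutorService pool = Executors.newFixedThreadPool(checks.size());
        try {
            CompletionService<Boolean> cs = new ExecutorCompletionService<>(pool);
            checks.forEach(cs::submit);
            for (int i = 0; i < checks.size(); i++) {
                // take() blocks until the next finished subtask, in completion order.
                if (!cs.take().get()) {
                    return "REJECT"; // short-circuit on the first rejection
                }
            }
            return "APPROVE";
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow(); // abandon any still-running checks on early exit
        }
    }
}
```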
In this embodiment, different concurrency containers can be deployed for different application scenarios of the plurality of subtasks, thereby increasing the processing speed of the subtasks.
In this embodiment, independent thread pools are used for different access requests. For example, an access request for parallel variable acquisition (the first scenario) is processed with thread pool X, while an access request for risk-control approval (the second scenario) is processed with thread pool Y, so the current interface can accept different types of access requests simultaneously and process them differently. When the number of threads in one pool reaches its bottleneck, execution of tasks in other pools is unaffected, avoiding mutual interference caused by thread pile-up.
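A sketch of this thread-pool isolation (pool sizes and all names are hypothetical; daemon threads are used only so the example JVM can exit cleanly):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

class IsolatedPools {
    // Build a fixed pool of daemon threads (daemon only for this example).
    static ExecutorService daemonPool(int n) {
        return Executors.newFixedThreadPool(n, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });
    }

    // One independent pool per request type: saturating one pool cannot
    // stall tasks submitted to the other.
    static final ExecutorService VARIABLE_POOL = daemonPool(4); // "thread pool X"
    static final ExecutorService APPROVAL_POOL = daemonPool(4); // "thread pool Y"

    // Variable-acquisition requests (first scenario) run on thread pool X.
    static String runVariableRequest(Supplier<String> task) {
        return CompletableFuture.supplyAsync(task, VARIABLE_POOL).join();
    }

    // Risk-control approval requests (second scenario) run on thread pool Y.
    static String runApprovalRequest(Supplier<String> task) {
        return CompletableFuture.supplyAsync(task, APPROVAL_POOL).join();
    }
}
```

Because the two pools share no threads, thread accumulation in one request type cannot delay the other.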
In an alternative embodiment, before S202, if this is not the first response to an access request of the requesting platform to the current interface, the method further includes: processing a plurality of second processing durations of the subtasks according to a preset rule to obtain a target duration; and determining the currently acquired target duration as the historical processing duration, where the second processing duration is a processing duration obtained within a preset time.
Specifically, the preset rule refers to obtaining a plurality of second processing durations of each subtask within a preset time before the current moment. For example, if the access request is received at 15:00, the second processing durations may be the subtask processing durations counted from 15:00 of the previous day up to the current moment. Alternatively, the preset rule may be to periodically analyze the second processing durations over a fixed period each day to obtain the target duration; for example, at 14:00 each day, the plurality of second processing durations between 14:00 of the previous day and the current 14:00 are analyzed to obtain the target duration.
Further, the target duration is a statistical value of the plurality of second processing durations, such as the average, the median, or the mode (the value with the highest probability of occurrence).
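A sketch of this statistics step, assuming durations are recorded in seconds; the average and median are shown, and the mode could be computed analogously:

```java
import java.util.Arrays;

// Sketch of turning the recorded second processing durations into the target
// duration used as the historical processing duration.
public class TargetDuration {
    public static double average(double[] durations) {
        return Arrays.stream(durations).average().orElse(0.0);
    }

    public static double median(double[] durations) {
        double[] sorted = durations.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // Sample second processing durations (seconds), matching fig. 5's values
        double[] samples = {0.3, 0.5, 0.4, 0.9, 1.8};
        System.out.printf("avg=%.2f median=%.2f%n", average(samples), median(samples));
    }
}
```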
In an alternative embodiment, the plurality of subtasks includes: a first subtask and at least two second subtasks, S203 includes:
and determining that the execution sequence between the first subtask and each second subtask is parallel execution and the execution sequence between at least two second subtasks is serial execution according to the historical processing time length corresponding to the first subtask and the historical processing time length corresponding to the second subtask.
The first subtask is a subtask with the longest historical processing time length in the plurality of subtasks, and the historical processing time length of the first subtask is larger than or equal to the sum of the historical processing time lengths of at least two second subtasks.
Specifically, the first subtask, i.e., the subtask with the longest historical processing duration, and the at least two second subtasks are called in parallel. Because the historical processing duration of the first subtask is greater than or equal to the sum of the historical processing durations of the second subtasks, the total processing duration of the plurality of subtasks equals the processing duration of the first subtask, which reduces the total processing duration to the greatest extent and improves the processing efficiency of the plurality of subtasks.
Illustratively, referring to fig. 5, the historical processing duration of subtask A is 0.3 seconds, of subtask B 0.5 seconds, of subtask C 0.4 seconds, of subtask D 0.9 seconds, and of subtask E 1.8 seconds. The first subtask is subtask E, and the second subtasks are subtask A, subtask B, and subtask C. The historical processing duration of the first subtask, 1.8 seconds, is greater than the sum of the historical processing durations of the second subtasks, 1.2 seconds. The time taken to process the plurality of subtasks is therefore 1.8 seconds, the historical processing duration of the first subtask.
Further, the plurality of subtasks further includes: a third subtask, and S203 further includes: determining, according to the historical processing duration of the third subtask and the historical processing duration of the first subtask, that the execution order of the third subtask and the first subtask is parallel execution.
The historical processing duration of the third subtask is longer than the historical processing duration of each second subtask, and the sum of the historical processing durations of the at least two second subtasks and the third subtask is longer than the historical processing duration of the first subtask.
Specifically, when the historical processing time of the third subtask is longer than the historical processing time of each second subtask, and the sum of the historical processing times of at least two second subtasks and the third subtask is longer than the historical processing time of the first subtask, the third subtask and the first subtask are executed in parallel, so that the processing efficiency of the multiple subtasks is improved.
Referring to fig. 5, the third subtask is subtask D. The historical processing duration of the third subtask is longer than that of any one of the second subtasks (subtask A, subtask B, and subtask C), and the sum (2.1 seconds) of the historical processing durations of the second subtasks and the third subtask is longer than the historical processing duration of the first subtask (1.8 seconds).
In the embodiment of the present application, a specific manner of determining the execution order of the plurality of subtasks is as follows. First, the historical processing duration T(max) of the subtask with the longest historical processing duration among the plurality of subtasks is compared with the sum T(min 1) + T(min 2) of the historical processing durations of the two subtasks with the shortest durations, and it is judged whether the inequality T(max) > T(min 1) + T(min 2) holds. If it holds, the judgment continues with the inequality T(max) > T(min 1) + T(min 2) + T(min 3), and so on, until the inequality

T(max) > T(min 1) + T(min 2) + ... + T(min n)

no longer holds. When the inequality fails at the nth term, the n-1 subtasks with the shortest historical processing durations are executed in series, this serial chain is executed in parallel with the subtask with the longest historical processing duration, and the nth subtask is also executed in parallel with the subtask with the longest historical processing duration, where n is an integer greater than or equal to 2, T(max) is the historical processing duration of the subtask with the longest historical processing duration among the plurality of subtasks, and T(min i) is the historical processing duration of the ith shortest of the remaining subtasks.
For example, the plurality of subtasks may be sorted in ascending order of historical processing duration, e.g., subtask A (rank 1) < subtask C (rank 2) < subtask B (rank 3) < subtask D (rank 4) < subtask E (rank 5). The subtask E with the longest historical processing duration (the highest rank) is determined as the first subtask. It is then determined that subtask E (1.8 seconds) > subtask A (0.3 seconds) + subtask C (0.4 seconds), so the two lowest-ranked subtasks (rank 1 and rank 2), subtask A and subtask C, are determined as second subtasks. Next, it is determined that subtask E (1.8 seconds) > subtask A (0.3 seconds) + subtask C (0.4 seconds) + subtask B (0.5 seconds), so subtask B is also determined as a second subtask. Finally, it is determined that subtask E (1.8 seconds) < subtask A (0.3 seconds) + subtask C (0.4 seconds) + subtask B (0.5 seconds) + subtask D (0.9 seconds), so subtask D is determined as the third subtask.
Further, if the decomposition additionally yields a subtask F with a historical processing duration of 0.6 seconds, subtask F and subtask D may be executed in series, with that chain executed in parallel with subtask E.
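The greedy procedure above can be sketched as follows. This is a simplification that builds a single serial chain and runs every remaining subtask (including the third subtask) in parallel with the longest one; the further refinement in which overflow subtasks such as subtask F form additional serial chains is omitted:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the greedy grouping: sort ascending by historical duration, grow a
// serial chain of the shortest subtasks while its total stays within the
// longest subtask's duration, and run everything else in parallel.
public class ExecutionPlanner {
    // durations: historical processing duration (seconds) per subtask name.
    // Returns [serialChain, parallelGroup]: the chain runs serially, and the
    // chain plus each member of parallelGroup run in parallel with one another.
    public static List<List<String>> plan(Map<String, Double> durations) {
        List<Map.Entry<String, Double>> sorted = new ArrayList<>(durations.entrySet());
        sorted.sort(Map.Entry.comparingByValue());
        Map.Entry<String, Double> longest = sorted.remove(sorted.size() - 1);

        List<String> chain = new ArrayList<>();
        List<String> parallel = new ArrayList<>();
        double sum = 0.0;
        for (Map.Entry<String, Double> e : sorted) {
            if (sum + e.getValue() <= longest.getValue()) {
                sum += e.getValue();
                chain.add(e.getKey()); // still fits beside the longest subtask
            } else {
                parallel.add(e.getKey()); // would overshoot: run it in parallel
            }
        }
        parallel.add(longest.getKey());
        return List.of(chain, parallel);
    }

    public static void main(String[] args) {
        Map<String, Double> d = Map.of("A", 0.3, "B", 0.5, "C", 0.4, "D", 0.9, "E", 1.8);
        System.out.println(plan(d)); // [[A, C, B], [D, E]]
    }
}
```

With the fig. 5 durations this reproduces the grouping derived in the text: A, C, B in series, with D and E in parallel beside that chain.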
By adopting the above method, the total processing duration of the plurality of subtasks can be kept at the processing duration of the most time-consuming subtask, since the total duration of the subtasks executed in series does not exceed the historical processing duration of the longest subtask. This improves the response speed while reducing thread resource consumption.
Further, the average historical processing time length is obtained by periodically analyzing the historical processing time length of each subtask, and the dynamic determination of serial/parallel execution among the subtasks is carried out according to the average historical processing time length, so that the dynamic balance of the efficiency of processing each subtask by the current interface and the thread resource consumption is realized.
In an alternative embodiment, S204 includes: and if the subtask needs the target parameter under the condition of executing the corresponding task, calling a non-current interface to acquire the target parameter.
For the non-current interfaces, such as non-current interface A1 and non-current interface A2, each subtask may call a parameter of a non-current interface according to its own task processing logic during processing: for example, subtask A calls parameter a1 of non-current interface A1, subtask B calls parameter a1 of non-current interface A1, and subtask C calls parameter a2 of non-current interface A2.
Illustratively, referring to FIG. 4, subtask A is processed using thread a, subtask B is processed using thread B, subtask C is processed using thread C, subtask D is processed using thread D, and subtask E is processed using thread E.
Further, during the processing of each subtask, if the subtask needs to call a parameter from a non-current interface, for example, parameter a1 of non-current interface A1 and/or parameter a2 of non-current interface A2, the subtasks may still be executed in parallel. For example, subtask A calls parameter a1 of non-current interface A1 during processing, subtask B calls parameter a1 of non-current interface A1, and subtask C calls parameter a2 of non-current interface A2. Because the non-current interface parameters called by the subtasks are independent of one another, the subtasks do not affect one another, and the processing speed of all subtasks can be increased.
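A sketch of these parallel, independent downstream calls; fetchA1 and fetchA2 are hypothetical stand-ins for calls to the non-current interfaces A1 and A2 (real calls would be remote):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of subtasks fetching their downstream parameters in parallel.
public class DownstreamCalls {
    static String fetchA1() { return "a1"; } // placeholder for non-current interface A1
    static String fetchA2() { return "a2"; } // placeholder for non-current interface A2

    public static String runSubtasks() {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // Each subtask pulls only the parameter it needs, so the three
            // downstream calls overlap instead of running one after another.
            CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> "A:" + fetchA1(), pool);
            CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> "B:" + fetchA1(), pool);
            CompletableFuture<String> c = CompletableFuture.supplyAsync(() -> "C:" + fetchA2(), pool);
            return a.join() + "," + b.join() + "," + c.join(); // collected in fixed order
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runSubtasks()); // A:a1,B:a1,C:a2
    }
}
```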
In the embodiment of the application, the execution results of the subtasks are collected to obtain the overall execution result, which is returned to the request platform. After each subtask is processed, its processing duration is recorded and stored. In addition, when a non-current interface is called and its response duration is greater than a duration threshold, the first server where the current interface is located may send the response duration to the terminal corresponding to the non-current interface, so that the staff responsible for the non-current interface can optimize it; the response speed of the current interface is then improved the next time a request platform calls it.
Illustratively, referring to fig. 4, suppose a request platform provides financial services to a user, and the user applies for a loan through the request platform. The request platform sends a loan request to the current interface, and the current interface runs its internal logic to process the plurality of subtasks corresponding to the loan request, such as an approval process. During the approval process, parameter a1 of non-current interface A1 and parameter a2 of non-current interface A2 need to be called, where non-current interface A1 is provided by financial regulatory agency m and non-current interface A2 by financial regulatory agency n. When the response duration of calling non-current interface A1 is greater than the duration threshold, the response duration is sent to financial regulatory agency m, so that its staff can optimize non-current interface A1.
Further, if the target parameter is obtained, the response duration of the non-current interface is acquired; if the response duration is greater than a preset threshold, the response duration is sent to the terminal corresponding to the non-current interface.
if the sub-task a calls the parameter a1 of the non-current interface a1 in the processing process, the sub-task a can be processed according to the parameter a 1. In the embodiment of the present application, the non-current interface refers to a downstream interface of the current interface.
In the embodiment of the application, when the parameter of the non-current interface is successfully called, the response duration of the non-current interface is recorded.
The response duration is used to instruct the terminal to update the non-current interface. In addition, the response duration returned to the terminal may also be an average response duration over a period of time.
In the embodiment of the application, the response time of the non-current interface also affects the processing time of the subtask, so that the response speed of the current interface can be improved by updating the non-current interface.
Further, if the target parameter is not acquired, the subtask is processed according to a preset fusing mode or a degrading mode.
When an abnormal condition such as a call failure or timeout occurs on a non-current interface, the subtask can be circuit-broken (fused) or degraded using interface-calling components such as resilience4j and Sentinel. For example, when a non-current interface is called to obtain parameter S but the call fails or times out, the preset degradation process returns a parameter k, and the returned parameter k is then used to process the corresponding subtask. By quickly fusing or degrading the non-current interface when a call failure or timeout is detected, the problem that a large number of threads block on calls to an abnormal non-current interface and affect the response speed of the current interface can be avoided.
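resilience4j and Sentinel provide production-grade circuit breaking and degradation; the sketch below only illustrates the degradation idea with the JDK, returning a preset fallback parameter when the downstream call fails or times out. The names and values (callWithFallback, parameter k) are illustrative:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Stdlib-only sketch of degradation: if the downstream (non-current interface)
// call fails or exceeds the timeout, a preset fallback parameter is returned
// instead, so threads do not pile up waiting on an abnormal interface.
public class DegradedCall {
    public static String callWithFallback(Supplier<String> downstream,
                                          long timeoutMillis, String fallback) {
        try {
            return CompletableFuture.supplyAsync(downstream)
                    .get(timeoutMillis, TimeUnit.MILLISECONDS); // bounded wait
        } catch (Exception e) { // timeout, interruption, or downstream failure
            return fallback;
        }
    }

    public static void main(String[] args) {
        // Healthy call: parameter S comes back from the downstream interface.
        String ok = callWithFallback(() -> "S", 200, "k");
        // Failing call: the preset degradation result k stands in.
        String degraded = callWithFallback(() -> {
            throw new RuntimeException("downstream unavailable");
        }, 200, "k");
        System.out.println(ok + " " + degraded); // S k
    }
}
```

The bounded wait is what prevents thread pile-up: a caller never blocks on an abnormal downstream interface longer than the configured timeout.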
In this embodiment of the present application, when calling non-current interfaces, the parameter-fetching calls may similarly be placed into the asynchronous concurrent container or the multi-threaded concurrency container for parallel execution.
In the embodiment of the application, each time a plurality of subtasks are processed, the processing time length corresponding to the subtask is recorded and stored in the database, so that the execution sequence is determined when the plurality of subtasks are processed next time, and the dynamic balance between the efficiency of processing each subtask by the current interface and the thread resource consumption is realized.
In the embodiment of the application, when the request processing task is processed for the first time, it is decomposed into a plurality of subtasks with no strong mutual dependency, and the subtasks that were originally executed in series are changed to parallel execution, which reduces the processing time of the plurality of subtasks and greatly improves the response speed of the current interface. Different concurrent containers are then used to process the plurality of subtasks of different application scenarios in a targeted way, and this adaptability further reduces their processing time. Recording the processing duration of each subtask provides data support for determining the execution order the next time the plurality of subtasks are processed, achieving a dynamic balance between the efficiency of processing the subtasks at the current interface and thread resource consumption. Finally, recording the response duration of the non-current interfaces can prompt the staff to optimize them, reducing their response duration and further reducing the processing time of the corresponding subtasks.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The embodiment of the application provides a data processing device which can be integrated on an electronic device such as a server. As shown in fig. 6, the data processing device 60 includes: a response module 61, an acquisition module 62, a determination module 63, an execution module 64, and a sending module 65. Wherein:
the response module 61 is configured to determine, in response to an access request of the request platform to the current interface, multiple subtasks corresponding to the current interface, where call relationships among the multiple subtasks are independent of each other;
the obtaining module 62 is configured to obtain a historical processing duration corresponding to each subtask, where the historical processing duration of each subtask is obtained according to a plurality of first processing durations of the subtask, and the first processing duration is obtained within a preset historical time period;
a determining module 63, configured to determine an execution order among the multiple subtasks according to the historical processing time length;
the execution module 64 is configured to invoke a plurality of subtasks to execute corresponding tasks according to the execution sequence, so as to obtain an execution result corresponding to each subtask;
and a sending module 65, configured to send the execution result to the request platform.
In a possible embodiment, the data processing device 60 further comprises:
processing module (not shown): the processing method comprises the steps of processing a plurality of second processing durations of the subtasks according to a preset rule before obtaining a historical processing duration corresponding to each subtask to obtain a target duration, wherein the second processing durations are the processing durations obtained within a preset time;
the obtaining module 62 is specifically configured to: and determining the currently acquired target time length as the historical processing time length.
In one possible embodiment, the plurality of subtasks includes: a first subtask and at least two second subtasks; the determining module 63 is specifically configured to: determine, according to the historical processing duration corresponding to the first subtask and the historical processing duration corresponding to the second subtasks, that the execution order between the first subtask and each second subtask is parallel execution and the execution order between the at least two second subtasks is serial execution, where the first subtask is the subtask with the longest historical processing duration among the plurality of subtasks, and the historical processing duration of the first subtask is greater than or equal to the sum of the historical processing durations of the at least two second subtasks.
In one possible embodiment, the plurality of subtasks further includes: a third subtask, and the determining module 63 is further configured to:
and determining the execution sequence of the third subtask and the first subtask to be parallel execution according to the historical processing time length of the third subtask and the historical processing time length of the first subtask, wherein the historical processing time length of the third subtask is greater than the historical processing time length of the second subtask, and the sum of the historical processing time lengths of the plurality of second subtasks and the third subtasks is greater than the historical processing time length of the first subtask.
In a possible embodiment, the data processing device 60 further comprises:
and a calling module (not shown) specifically configured to call a non-current interface to obtain a target parameter if the target parameter is required when the subtask executes the corresponding task before the corresponding task is executed by calling the multiple subtasks according to the execution sequence and the execution result corresponding to each subtask is obtained.
In a possible embodiment, the data processing device 60 further comprises:
a parameter obtaining module (not shown), specifically configured to, after the plurality of subtasks are called according to the execution sequence to execute the corresponding task and an execution result corresponding to each subtask is obtained, obtain a response duration of a non-current interface if a target parameter is obtained;
and a duration sending module (not shown) specifically configured to send a response duration to a terminal corresponding to the non-current interface if the response duration is greater than a preset threshold, where the response duration is used to instruct the terminal to update the non-current interface.
In one possible embodiment, the processing module is further configured to: and if the target parameters are not acquired, processing the subtasks according to a preset fusing mode or a degradation mode.
In one possible embodiment, the determining module 63 is further configured to: and if the access request of the request platform to the current interface is responded for the first time, determining that the execution sequence among the multiple subtasks is parallel execution.
The apparatus provided in the embodiment of the present application may be used to execute the method in the embodiments shown in fig. 2 and fig. 4, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be noted that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the processing module may be a processing element separately set up, or may be implemented by being integrated in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and a function of the processing module may be called and executed by a processing element of the apparatus. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element here may be an integrated circuit with signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The processes or functions according to the embodiments of the present application are generated in whole or in part when the computer instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Fig. 7 is a schematic structural diagram of an electronic device 80 according to an embodiment of the present application. As shown in fig. 7, the electronic device may include: a processor 71, a memory 72, a communication interface 73, and a system bus 74. The memory 72 and the communication interface 73 are connected to the processor 71 through the system bus 74 and perform communication with each other, the memory 72 is used for storing instructions, the communication interface 73 is used for communicating with other devices, and the processor 71 is used for calling the instructions in the memory to execute the scheme of the above-mentioned data processing method embodiment.
The system bus 74 mentioned in fig. 7 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus 74 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface 73 is used to enable communication between the database access device and other devices (e.g., clients, read-write libraries, and read-only libraries).
The Memory 72 may include a Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor 71 may be a general-purpose Processor, including a central processing unit, a Network Processor (NP), and the like; but also a digital signal processor DSP, an application specific integrated circuit ASIC, a field programmable gate array FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program runs on an electronic device, the electronic device is enabled to execute the data processing method according to any one of the above method embodiments.
The embodiment of the present application further provides a chip for executing the instruction, where the chip is used to execute the data processing method according to any of the above method embodiments.
Embodiments of the present application further provide a computer program product, which includes a computer program, where the computer program is stored in a computer-readable storage medium, and at least one processor can read the computer program from the computer-readable storage medium, and when the computer program is executed by the at least one processor, the at least one processor can implement the data processing method of any one of the above method embodiments.
In this application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division". "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application. In the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A data processing method, comprising:
responding to an access request of a request platform to a current interface, and determining a plurality of subtasks corresponding to the current interface, wherein the calling relations among the subtasks are mutually independent;
acquiring historical processing time length corresponding to each subtask, wherein the historical processing time length of each subtask is obtained according to a plurality of first processing time lengths of the subtask, and the first processing time length is obtained within a preset historical time period;
determining an execution sequence among the plurality of subtasks according to the historical processing duration;
calling the multiple subtasks to execute corresponding tasks according to the execution sequence to obtain an execution result corresponding to each subtask;
and sending the execution result to the request platform.
2. The method according to claim 1, wherein before the acquiring of the historical processing time length corresponding to each subtask, the method further comprises:
processing a plurality of second processing time lengths of the subtask according to a preset rule to obtain a target time length, wherein the second processing time lengths are processing time lengths obtained within a preset time period; and
the acquiring of the historical processing time length corresponding to each subtask comprises:
determining the currently acquired target time length as the historical processing time length.
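Claim 2 leaves the "preset rule" open. One common concrete choice is to collapse the recorded durations into a single target value with an arithmetic mean; the sketch below assumes that rule purely for illustration (the claim only requires *some* preset rule):

```python
def target_duration(samples_ms: list) -> float:
    """Collapse recent recorded processing durations into one target duration.

    The preset rule is assumed here to be a simple arithmetic mean;
    other rules (median, trimmed mean, EWMA) would fit the claim equally.
    """
    if not samples_ms:
        raise ValueError("need at least one recorded duration")
    return sum(samples_ms) / len(samples_ms)
```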
3. The method according to claim 1, wherein the plurality of subtasks comprises a first subtask and at least two second subtasks, and the determining of the execution sequence among the plurality of subtasks according to the historical processing time lengths comprises:
determining, according to the historical processing time length corresponding to the first subtask and the historical processing time lengths corresponding to the second subtasks, that the execution sequence between the first subtask and each second subtask is parallel execution and that the execution sequence among the at least two second subtasks is serial execution, wherein the first subtask is the subtask with the longest historical processing time length among the plurality of subtasks, and the historical processing time length of the first subtask is greater than or equal to the sum of the historical processing time lengths of the at least two second subtasks.
4. The method according to claim 3, wherein the plurality of subtasks further comprises a third subtask, and the determining of the execution sequence among the plurality of subtasks according to the historical processing time lengths further comprises:
determining, according to the historical processing time length of the third subtask and the historical processing time length of the first subtask, that the execution sequence between the third subtask and the first subtask is parallel execution, wherein the historical processing time length of the third subtask is longer than the historical processing time length of each second subtask, and the sum of the historical processing time lengths of the plurality of second subtasks and the third subtask is longer than the historical processing time length of the first subtask.
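Claims 3 and 4 together describe a packing rule: the subtask with the longest historical duration runs alone in parallel, shorter subtasks are chained serially as long as their summed historical durations stay within the longest duration, and a subtask that would overflow the chain (the "third subtask") is promoted to run in parallel with the first. A sketch of that grouping (function and variable names are illustrative, not from the patent):

```python
def plan_execution(history_ms: dict) -> list:
    """Group subtasks into parallel lanes; each inner list runs serially.

    The subtask with the longest historical duration gets its own lane
    (the "first subtask"). Remaining subtasks are packed serially into a
    chain while their summed durations stay within that longest duration
    ("second subtasks"); any subtask that would overflow the chain opens
    a new parallel lane ("third subtask").
    """
    ordered = sorted(history_ms, key=history_ms.get, reverse=True)
    first, rest = ordered[0], ordered[1:]
    budget = history_ms[first]
    lanes = [[first]]
    chain, chain_sum = [], 0.0
    for name in rest:
        if chain_sum + history_ms[name] <= budget:
            chain.append(name)        # serial "second" subtask
            chain_sum += history_ms[name]
        else:
            lanes.append([name])      # overflow: parallel "third" subtask
    if chain:
        lanes.append(chain)
    return lanes
```

With durations a=100, c=50, b=40, d=30: c and b chain serially (50+40 ≤ 100), while d would overflow the chain and therefore runs in its own parallel lane alongside a.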
5. The method according to any one of claims 1 to 4, wherein before the calling of the plurality of subtasks to execute corresponding tasks according to the execution sequence to obtain the execution result corresponding to each subtask, the method further comprises:
if a subtask requires a target parameter when executing its corresponding task, calling a non-current interface to acquire the target parameter.
6. The method according to claim 5, wherein after the calling of the plurality of subtasks to execute corresponding tasks according to the execution sequence to obtain the execution result corresponding to each subtask, the method further comprises:
if the target parameter is acquired, acquiring a response time length of the non-current interface; and
if the response time length is greater than a preset threshold, sending the response time length to a terminal corresponding to the non-current interface, wherein the response time length is used for instructing the terminal to update the non-current interface.
7. The method according to claim 5, further comprising:
if the target parameter is not acquired, processing the subtask according to a preset circuit-breaking (fusing) mode or degradation mode.
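The "fusing mode" of claim 7 corresponds to the circuit-breaker pattern: when a required parameter cannot be fetched from the non-current interface, the subtask either fails fast (fusing) or falls back to a degraded default (degrading) instead of blocking the whole request. A minimal sketch, where the exception type, fallback value, and return shape are all assumptions:

```python
class ParameterUnavailable(Exception):
    """Raised when the non-current interface fails to supply a parameter."""

def run_with_fallback(fetch_param, run_subtask, default=None):
    """On fetch failure, fuse (skip the subtask) or degrade (use a default).

    fetch_param: callable acquiring the target parameter from the
                 non-current interface; run_subtask: the subtask body.
    """
    try:
        param = fetch_param()
    except ParameterUnavailable:
        if default is None:
            return {"status": "fused"}  # fusing mode: short-circuit
        param = default                 # degrading mode: fallback value
    return run_subtask(param)
```

In either mode the failing dependency is isolated, so the remaining subtasks of the request can still run and return their execution results.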
8. The method according to any one of claims 1 to 4, further comprising:
if the access request from the request platform to the current interface is being responded to for the first time, determining that the execution sequence among the plurality of subtasks is parallel execution.
9. A data processing apparatus, comprising:
a response module, configured to respond to an access request from a request platform to a current interface and determine a plurality of subtasks corresponding to the current interface, wherein calling relationships among the plurality of subtasks are mutually independent;
an acquisition module, configured to acquire a historical processing time length corresponding to each subtask, wherein the historical processing time length of each subtask is obtained from a plurality of first processing time lengths of the subtask, and each first processing time length is a processing time length obtained within a preset historical time period;
a determining module, configured to determine an execution sequence among the plurality of subtasks according to the historical processing time lengths;
an execution module, configured to call the plurality of subtasks to execute corresponding tasks according to the execution sequence, to obtain an execution result corresponding to each subtask; and
a sending module, configured to send the execution results to the request platform.
10. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the data processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when run on an electronic device, causes the electronic device to perform the data processing method according to any one of claims 1 to 8.
CN202210503729.5A 2022-05-10 2022-05-10 Data processing method, device, equipment and storage medium Active CN114880120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210503729.5A CN114880120B (en) 2022-05-10 2022-05-10 Data processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114880120A true CN114880120A (en) 2022-08-09
CN114880120B CN114880120B (en) 2024-07-23

Family

ID=82676266



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239344A (en) * 2017-06-05 2017-10-10 厦门美柚信息科技有限公司 Distributed allocation method and system
CN108446176A (en) * 2018-02-07 2018-08-24 平安普惠企业管理有限公司 A kind of method for allocating tasks, computer readable storage medium and terminal device
CN112150035A (en) * 2020-10-13 2020-12-29 中国农业银行股份有限公司 Data processing method and device
US20200410791A1 (en) * 2019-06-25 2020-12-31 Scientia Potentia Est, LLC. Site supervisor system for construction sites
CN113326114A (en) * 2021-06-11 2021-08-31 深圳前海微众银行股份有限公司 Batch task processing method and device
CN113849284A (en) * 2021-08-19 2021-12-28 杭州逗酷软件科技有限公司 Task running method and device, storage medium and terminal

Non-Patent Citations (2)

Title
RADOSŁAW RUDEK et al.: "Optimization of task processing on parallel processors with learning abilities", 2013 18th International Conference on Methods & Models in Automation & Robotics (MMAR), 25 November 2013, pages 544-547 *
XU Rongbin et al.: "Directed acyclic graph real-time scheduling method based on task execution deadlines", Computer Integrated Manufacturing Systems, 12 April 2016, pages 455-464 *

Similar Documents

Publication Publication Date Title
CN108595157B (en) Block chain data processing method, device, equipment and storage medium
US9348677B2 (en) System and method for batch evaluation programs
US20150032806A1 (en) Load distribution in client server system
CN109598407B (en) Method and device for executing business process
US8869149B2 (en) Concurrency identification for processing of multistage workflows
CN112114973A (en) Data processing method and device
CN114519006A (en) Test method, device, equipment and storage medium
CN113723893A (en) Method and device for processing orders
CN117234734A (en) Acceleration card load balancing scheduling method and device, communication equipment and storage medium
CN114880120B (en) Data processing method, device, equipment and storage medium
CN111400043A (en) Transaction pool management method, device and storage medium
CN116701123A (en) Task early warning method, device, equipment, medium and program product
CN117118698A (en) Access flow limiting method, device and equipment of metadata server
CN116226134A (en) Method and device for writing data into file and data writing database
US20220027251A1 (en) System for monitoring activity in a process and method thereof
CN115150399A (en) Load balancing method, load balancing device, processing system and storage medium
CN115795342B (en) Method and device for classifying business scenes, storage medium and electronic equipment
US20120158651A1 (en) Configuration of asynchronous message processing in dataflow networks
CN117453376B (en) Control method, device, equipment and storage medium for high-throughput calculation
CN117742928B (en) Algorithm component execution scheduling method for federal learning
CN115934300B (en) Cloud computing platform inspection task scheduling method and system
US20230418657A1 (en) Runtime prediction for job management
US20230359490A1 (en) Device, system and method for scheduling job requests
US20220113973A1 (en) Dynamic rate limiting of operation executions for accounts
CN118055004A (en) Target node determining method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant