CN116126538A - Service processing method, device, equipment and storage medium - Google Patents

Service processing method, device, equipment and storage medium

Info

Publication number
CN116126538A
CN116126538A · CN202310134767.2A
Authority
CN
China
Prior art keywords
sub
service
services
execution
branches
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310134767.2A
Other languages
Chinese (zh)
Inventor
陈科杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN202310134767.2A
Publication of CN116126538A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038 Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/54 Interprogram communication
    • G06F9/547 Remote procedure calls [RPC]; Web services
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/48 Indexing scheme relating to G06F9/48
    • G06F2209/484 Precedence
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5011 Pool
    • G06F2209/5021 Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Multi Processors (AREA)
  • Hardware Redundancy (AREA)

Abstract

Embodiments of this specification provide a service processing method, device, equipment, and storage medium, relating to the field of computer technology. The method comprises: upon receiving a call request for a target service, determining a plurality of sub-services of the target service, wherein each sub-service comprises a plurality of branches determined by the input parameters of that sub-service; determining the execution probability of each branch of each sub-service; selecting a branch for each sub-service based on the determined execution probabilities and concurrently executing the selected branches; and merging the execution results of the sub-services to determine the execution result of the target service.

Description

Service processing method, device, equipment and storage medium
This application is a divisional application of Chinese patent application No. 201910170430.0, entitled "Service processing", filed on March 7, 2019.
Technical Field
The present document relates to the field of computer technologies, and in particular, to a service processing method, a service processing device, a service processing apparatus, and a computer readable storage medium.
Background
With the rapid development of internet technology, more and more services are provided to users over the network. As these services grow, the business a service system must handle becomes increasingly complex, placing higher demands on the performance of internet services.
In one existing scheme, the parallel processing capability of a computer is used to concurrently execute services whose business logic is unrelated. However, for a service link whose upstream and downstream stages are logically dependent, it is difficult for this scheme to optimize the performance of the service link through parallel processing.
Disclosure of Invention
One or more embodiments of the present specification provide a service processing method. The method includes: upon receiving a call request for a target service, determining a plurality of sub-services of the target service, where each sub-service includes a plurality of branches determined by the input parameters of that sub-service; selecting a branch for each sub-service based on the execution probabilities of its branches; concurrently executing the selected branches; and merging the execution results of the sub-services to determine the execution result of the target service.
One or more embodiments of the present specification provide a service processing apparatus. The apparatus includes a sub-service determining unit that, upon receiving a call request for a target service, determines a plurality of sub-services of the target service, where each sub-service includes a plurality of branches determined by the input parameters of that sub-service. The apparatus further includes a concurrent execution unit that selects a branch for each sub-service based on the execution probabilities of its branches and concurrently executes the selected branches, and a result processing unit that merges the execution results of the sub-services to determine the execution result of the target service.
One or more embodiments of the present specification provide a service processing device. The device includes a processor and a memory arranged to store computer-executable instructions. The instructions, when executed, cause the processor to: upon receiving a call request for a target service, determine a plurality of sub-services of the target service, where each sub-service includes a plurality of branches determined by the input parameters of that sub-service; select a branch for each sub-service based on the execution probabilities of its branches; concurrently execute the selected branches; and merge the execution results of the sub-services to determine the execution result of the target service.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions. The instructions, when executed by a processor: upon receiving a call request for a target service, determine a plurality of sub-services of the target service, where each sub-service includes a plurality of branches determined by the input parameters of that sub-service; select a branch for each sub-service based on the execution probabilities of its branches; concurrently execute the selected branches; and merge the execution results of the sub-services to determine the execution result of the target service.
Drawings
To illustrate the embodiments of this document or the prior-art technical solutions more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of this document; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 shows a schematic block diagram of an application scenario of a service processing method provided according to some embodiments of the present specification;
FIG. 2 illustrates a flow diagram of a business processing method provided in accordance with some embodiments of the present description;
FIG. 3 illustrates a flow diagram provided in accordance with some embodiments of the present description for determining the probability of execution of each branch of each sub-service;
FIG. 4 is a flow diagram illustrating a process for merging execution results of respective sub-services according to some embodiments of the present disclosure;
FIG. 5 is a flow diagram of a business processing method according to other embodiments of the present disclosure;
fig. 6 shows a schematic block diagram of a service processing apparatus provided according to some embodiments of the present description;
FIG. 7 illustrates a schematic block diagram of a concurrency execution unit provided in accordance with some embodiments of the present specification;
FIG. 8 illustrates a schematic block diagram of an execution probability prediction unit provided in accordance with some embodiments of the present specification; and
fig. 9 shows a schematic block diagram of a service processing device provided in accordance with some embodiments of the present description.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of the present disclosure.
Fig. 1 shows a schematic block diagram of an application scenario of a service processing method provided according to some embodiments of the present specification. Referring to fig. 1, the application scenario may include: at least one client 110 and a server 120. The client 110 communicates with the server 120 via a network 130. A call request for a target service is initiated on the client 110 to the server 120. After receiving the call request sent by the client 110, the server 120 executes the multiple sub-services of the target service, performs merging processing on the execution results of the sub-services, and returns the final execution result to the client 110.
It should be noted that the client 110 may be a mobile phone, a tablet computer, a desktop computer, a portable notebook computer, a POS (Point of Sale) terminal, or the like. The server 120 may be a physical server comprising an independent host, a virtual server carried by a host cluster, or a cloud server. The network 130 may be a wired or wireless network; for example, it may be the Public Switched Telephone Network (PSTN) or the internet.
The steps of the service processing method in the exemplary embodiments of the present disclosure may be executed partly by the client 110 and partly by the server 120, or entirely by the server 120; this is not specifically limited herein.
A service processing method according to an exemplary embodiment of the present specification is described below with reference to fig. 2, in conjunction with the application scenario of fig. 1. It should be noted that the above application scenario is shown only to facilitate understanding of the spirit and principle of the present invention and does not limit the embodiments of the present invention in any way. Rather, embodiments of the invention may be applied to any suitable scenario.
Fig. 2 shows a flow diagram of a service processing method provided according to some embodiments of the present specification, which may be applied to the server 120 in fig. 1. Referring to fig. 2, the service processing method includes steps S210 to S240, and the service processing method in the exemplary embodiment of fig. 2 is described in detail below.
Referring to fig. 2, in step S210, upon receiving a call request to a target service, a plurality of sub-services of the target service are determined, each of the sub-services including a plurality of branches, the plurality of branches being determined based on input parameters of the sub-services.
In an exemplary embodiment, the target service is a serial service link, such as an internet back-end service. The service link includes a plurality of sub-services or execution units, each being one or a group of relatively small, independent functional units; together, the sub-services form an execution flow with actual business meaning. Adjacent sub-services on a serial service link are logically dependent: the input of a subsequent sub-service depends on the output of the previous sub-service, while the branches within a sub-service have no direct dependency on one another. Further, each sub-service contains a plurality of branches determined by the input parameters of that sub-service.
In an example embodiment, the target service may be an order placing service or an insurance application service, or may be another appropriate service, such as a ticket purchasing service or a login service, which is not particularly limited in this specification. The following description will be given by taking an order service as an example.
The order placing service may include a plurality of sub-services or execution units, each including a plurality of branches determined by the input parameters of that sub-service. For example, an order placing service for a target commodity may include a select-commodity sub-service, a confirm-order sub-service, a payment sub-service, and the like. The select-commodity sub-service may select business logic by color and size, where each color-size combination is a branch, for example, black S through black XL branches, white S through white XL branches, and red S through red XL branches. The confirm-order sub-service may select business logic by receiving address and delivery mode, where each address-delivery combination is a branch, for example, holiday delivery to the first receiving address, workday delivery to the first receiving address, and so on. The branches of the payment sub-service may include an Alipay payment branch, a bank card payment branch, a balance payment branch, and the like.
Further, in an example embodiment, the sub-services of the target service are organized into an ordered sub-service queue in their logical order on the service link, and this queue is stored. For example, the sub-services may be sorted in logical order over the entire service link, and their call interfaces stored in a linked list or array according to the sorting result, so that each sub-service can be invoked through the linked list or array.
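The queue described above can be sketched in a few lines of Python (the patent itself contains no code; the sub-service names and return values below are illustrative placeholders, not the patent's implementation):

```python
def select_goods(params):
    # Placeholder sub-service: choose a SKU and a default delivery.
    return {"sku": "black-S", "delivery": "addr1-workday"}

def confirm_order(params):
    # Placeholder sub-service: confirm the order for the chosen SKU.
    return {"order": params["sku"], "pay_method": "Alipay"}

def pay(params):
    # Placeholder sub-service: settle the confirmed order.
    return {"paid": True, "order": params["order"]}

# Ordered in the logical order of the service link; a Python list (array)
# works as well as a linked list for sequential invocation.
sub_service_queue = [select_goods, confirm_order, pay]

result = {}
for sub_service in sub_service_queue:
    result = sub_service(result)  # serial baseline: output feeds the next input
```

This is the serial baseline that the concurrent execution in the later steps speeds up.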
In step S220, the execution probability of each branch of each sub-service is determined.
In an example embodiment, historical execution data of each sub-service of the target service is acquired, and the execution probability of each branch of each sub-service is determined based on that data. Specifically, the input parameters of each sub-service may be extracted from the historical execution data of the target service and counted, and the execution probability of each branch determined from the statistics.
For example, assuming the target service is an order placing service for a target commodity, the input parameters of each sub-service may be extracted from the historical order data: historical color and size information from the select-commodity sub-service, historical receiving addresses and delivery modes from the confirm-order sub-service, and payment mode information from the payment sub-service. The extracted information is counted, and the execution probability of each branch (that is, each input parameter) of each sub-service is determined from the statistics. For instance, the number of executions of each color-size branch of the select-commodity sub-service can be counted, and the ratio of each branch's execution count to the total execution count taken as that branch's execution probability.
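Counting input parameters and taking each branch's share of the total as its execution probability can be sketched as follows (the history data is illustrative, not from the patent):

```python
from collections import Counter

def branch_probabilities(history):
    """Execution probability of each branch: its execution count divided by
    the total number of executions observed in the historical data."""
    counts = Counter(history)
    total = sum(counts.values())
    return {branch: count / total for branch, count in counts.items()}

# Illustrative history for the select-commodity sub-service: each entry is
# the color-size input parameter of one historical order.
history = (["black-L"] * 50 + ["black-S"] * 10 + ["black-XL"] * 10
           + ["white-S"] * 10 + ["white-L"] * 20)
probs = branch_probabilities(history)
# black-L executed 50 of 100 times, so its execution probability is 0.5
```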
In step S230, a branch corresponding to each sub-service is selected based on the determined magnitude of the execution probability of each branch of each sub-service, and the branches of each selected sub-service are concurrently executed.
In an example embodiment, after the execution probabilities of the branches of each sub-service are obtained, a branch is selected for each sub-service based on those probabilities, and the selected branches are executed concurrently. For example, suppose sub-service 1 has three branches with execution probabilities of 15% for branch 1, 60% for branch 2, and 25% for branch 3, and sub-service 2 has three branches with execution probabilities of 70% for branch 1, 20% for branch 2, and 10% for branch 3. Then branch 2 of sub-service 1 and branch 1 of sub-service 2 are selected preferentially, and the two selected branches are executed concurrently.
Further, in an example embodiment, a branch queue is generated for each sub-service based on the determined execution probabilities; the branches within each queue are sorted by execution probability; and branches are selected from the queues in descending order of probability. For example, sorting the branch queue of sub-service 1 yields {branch 2 (60%); branch 3 (25%); branch 1 (15%)}, and sorting the branch queue of sub-service 2 yields {branch 1 (70%); branch 2 (20%); branch 3 (10%)}. Corresponding branches are then selected in turn from the two queues and executed concurrently.
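The sorted branch queues and the head-of-queue selection can be sketched as follows (probabilities taken from the example above; a sketch, not the patent's implementation):

```python
def build_branch_queue(branch_probs):
    """Sort a sub-service's branches into a queue by execution probability,
    highest first."""
    return sorted(branch_probs.items(), key=lambda kv: kv[1], reverse=True)

sub1_queue = build_branch_queue({"branch1": 0.15, "branch2": 0.60, "branch3": 0.25})
sub2_queue = build_branch_queue({"branch1": 0.70, "branch2": 0.20, "branch3": 0.10})

# The head of each queue is the branch selected first for concurrent execution.
selected = [queue[0][0] for queue in (sub1_queue, sub2_queue)]
```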
In step S240, the execution results of the sub-services are combined to determine the execution result of the target service.
In an example embodiment, the execution results of the sub-services are ordered according to their logical execution sequence, and the result of each preceding sub-service is matched, in order, against the input parameters with which the executed branch of the following sub-service was run, until all sub-services have been matched. If the input parameters of the corresponding branch do not match, the result of the preceding sub-service is used as the input of the following sub-service, which is re-executed; the new result is then matched against the input parameters of the branch of the sub-service after it, and so on, until all sub-services are matched. Once all sub-services are matched, the result of the final sub-service is taken as the merged execution result.
For example, when the target service is an order placing service, the three sub-services are ordered as select-commodity, confirm-order, payment. The result of the select-commodity sub-service is matched against the input of the confirm-order sub-service; if it matches, the result of the confirm-order sub-service is matched against the input of the payment sub-service; and if that matches, the result of the payment sub-service is taken as the final result of the target service. If the input of the confirm-order sub-service does not match, the result of the select-commodity sub-service (the commodity type, color, and size) is used as the input of the confirm-order sub-service, which is re-executed, and the next round of matching proceeds from the new result. After select-commodity, confirm-order, and payment are all matched, the result of the payment sub-service, for example the corresponding commodity amount deducted from a bank card, is the execution result of the target service.
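The matching-and-re-execution merge can be sketched abstractly as follows (the toy functions `double` and `plus_one` stand in for sub-services, and `rerun` is an assumed hook for re-executing a mispredicted sub-service; none of these names come from the patent):

```python
def merge_results(executed, rerun):
    """executed: the speculatively executed chain, in logical order, as
    (sub_service, predicted_input, result) triples.  rerun(sub_service, x)
    re-executes a sub-service with the actual input x when its predicted
    input did not match the previous sub-service's real output."""
    actual = executed[0][2]  # result of the first sub-service
    for sub_service, predicted_input, result in executed[1:]:
        if predicted_input == actual:
            actual = result                      # prediction matched: keep it
        else:
            actual = rerun(sub_service, actual)  # mismatch: re-execute
    return actual

double, plus_one = (lambda x: x * 2), (lambda x: x + 1)
# Hit: plus_one was speculatively run with predicted input 6; double(3) really
# is 6, so the speculative result 7 is kept.
hit = merge_results([(double, None, 6), (plus_one, 6, 7)], lambda f, x: f(x))
# Miss: plus_one was speculatively run with predicted input 5; the actual
# output of double is 6, so plus_one is re-executed with 6, giving 7.
miss = merge_results([(double, None, 6), (plus_one, 5, 99)], lambda f, x: f(x))
```

The miss path is exactly the rollback the method tries to make rare by predicting high-probability branches.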
The business processing method of the example embodiment of fig. 2 has several advantages. First, determining a plurality of sub-services of the target service, each comprising a plurality of branches, allows a service link whose upstream and downstream are logically associated to be split into sub-services, which facilitates performance optimization of the target service. Second, predicting the execution probability of each branch of each sub-service and concurrently executing the selected branches reduces the rollback time caused by branch mis-prediction and optimizes the calling efficiency of the back-end service. Third, merging the execution results of the sub-services prevents branch prediction errors from affecting the final result, improving the accuracy of the target service's execution result.
Further, in some example embodiments, the selected branches of the sub-services are submitted to a concurrent task pool, which executes them concurrently in descending order of execution probability. For example, the concurrent task pool may be a thread pool: the selected branches are submitted to the thread pool, which preferentially executes the branches with the highest execution probability of each sub-service. At run time, the concurrent task pool can provide a context for sub-service execution, including function dependencies and basic parameters; it can also isolate the sub-service runtime environments, providing each sub-service with an independent transaction execution space so that execution results do not affect one another. Executing the branches of the sub-services concurrently through the task pool further improves data processing efficiency.
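A thread pool submission in descending probability order might look like this (Python's `ThreadPoolExecutor` stands in for the concurrent task pool, and `run_branch` is a placeholder; the patent does not specify an implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_branch(sub_service, branch):
    # Placeholder: execute one branch of one sub-service in an isolated task.
    return f"{sub_service}:{branch}"

# Selected (sub_service, branch, execution_probability) triples.
selected = [("sub1", "branch2", 0.60), ("sub2", "branch1", 0.70)]
# Submit in descending order of execution probability, as described above.
selected.sort(key=lambda item: item[2], reverse=True)

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_branch, s, b) for s, b, _ in selected]
    results = [f.result() for f in futures]
```

Collecting each `Future` with `result()` in submission order preserves the probability ordering in `results` regardless of which thread finishes first.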
Further, in an example embodiment, after a sub-service is executed, it is marked as mergeable; when all sub-services are in the mergeable state, their execution results are merged.
Fig. 3 illustrates a flow diagram provided in accordance with some embodiments of the present description for determining the probability of execution of each branch of each sub-service.
Referring to fig. 3, in step S310, history execution data of each branch of each sub-service is acquired.
In an example embodiment, the historical execution data of each branch of each sub-service is obtained from the server side. For example, let the target service be an order placing service for a target commodity whose sub-services include select-commodity, confirm-order, and payment sub-services. The input parameters of each sub-service are extracted from the historical order data: historical color and size information corresponding to the target commodity from the select-commodity sub-service; the historical receiving address and delivery mode information from the confirm-order sub-service; and the payment mode information from the payment sub-service.
In step S320, the number of executions of each branch of each sub-service is counted based on the history execution data.
In an example embodiment, after the historical execution data of each branch of each sub-service is acquired, the number of executions of each branch is counted from that data. For example, for an order placing service comprising select-commodity, confirm-order, and payment sub-services: the executions of each color-size branch are counted from the extracted historical color and size information; the executions of each address-and-delivery-mode branch are counted from the historical receiving address and delivery mode information; and the executions of each payment mode branch are counted from the historical payment mode information.
In step S330, a ratio of the number of executions of each branch to the total number of executions is determined based on the statistical result, and the ratio is taken as the execution probability of the corresponding branch.
In an example embodiment, after the execution counts of the branches of each sub-service are obtained, the ratio of each branch's execution count to the total execution count is determined and taken as that branch's execution probability. For example, if the black S branch executed 10 times, the black L branch 50 times, the black XL branch 10 times, the white S branch 10 times, and the white L branch 20 times, out of 100 total executions, then the execution probabilities are 10% for black S, 50% for black L, 10% for black XL, 10% for white S, and 20% for white L. Further, the execution counts and their ratios to the total may be dynamically updated based on the execution results of the sub-services.
Further, in an example embodiment, the execution probability of each branch of each sub-service may be determined by a statistical model. Taking the color-size selection sub-service as an example, let the total number of executions be n, with the black S branch executed a times, the black L branch b times, the black XL branch c times, the white S branch d times, and the white L branch e times. The execution probabilities are then f1 = a/n for black S, f2 = b/n for black L, f3 = c/n for black XL, f4 = d/n for white S, and f5 = e/n for white L. After the sub-service completes, the probability of the corresponding branch is updated based on its input parameter (the color and size): for example, after the black S branch executes, its probability becomes f1 = (a+1)/(n+1).
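The dynamic update f1 = a/n becoming f1 = (a+1)/(n+1) can be sketched as follows (counts taken from the earlier color-size example; a sketch, not the patent's implementation):

```python
def update_branch_probabilities(counts, total, executed_branch):
    """After a branch actually executes, increment its count and the total,
    then recompute every branch's probability as count / total."""
    counts = dict(counts)  # copy so the caller's statistics are untouched
    counts[executed_branch] = counts.get(executed_branch, 0) + 1
    total += 1
    probs = {branch: count / total for branch, count in counts.items()}
    return probs, counts, total

counts = {"black-S": 10, "black-L": 50, "black-XL": 10,
          "white-S": 10, "white-L": 20}
probs, counts, total = update_branch_probabilities(counts, 100, "black-S")
# black-S probability is now (10 + 1) / (100 + 1)
```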
Furthermore, in an example embodiment, the execution probability of each branch of each sub-service may also be determined by a statistically based probabilistic predictive model, for example, a historical execution characteristic, such as a historical execution number characteristic, of each branch of each sub-service may be extracted, the probabilistic predictive model may be trained based on the historical execution characteristic, and the execution probability of each branch of each sub-service may be determined by the trained probabilistic predictive model.
Further, in an example embodiment, parameters of the probabilistic predictive model may also be adjusted based on the execution results of the respective sub-services to optimize the model. For example, in some embodiments, if the probabilistic predictive model predicted correctly, that is, if the input parameters of the corresponding branch match during result merging, the matching result is fed back to the probabilistic predictive model and its parameters are adjusted based on that result.
The probabilistic predictive model may be a bayesian model, a support vector machine model, or a decision tree model, or may be another appropriate statistical model, such as a neural network model or a logistic regression model, which is not particularly limited in this specification.
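As one minimal illustration, a simple Laplace-smoothed frequency model can stand in for the probabilistic predictive model (the specification equally permits Bayesian, support vector machine, decision tree, or other models); the train/predict/feedback cycle might then look as follows, with all names hypothetical:

```python
class SmoothedBranchPredictor:
    """Toy stand-in for the probabilistic predictive model: it is trained
    on historical execution-count features, predicts an execution
    probability per branch, and adjusts its counts when a matching
    result is fed back."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing strength (assumed parameter)
        self.counts = {}

    def train(self, historical_counts):
        # historical_counts: branch -> historical execution-count feature.
        self.counts = dict(historical_counts)

    def predict(self):
        denom = sum(self.counts.values()) + self.alpha * len(self.counts)
        return {b: (c + self.alpha) / denom for b, c in self.counts.items()}

    def feedback(self, matched_branch):
        # Matching result fed back during result merging adjusts the counts.
        self.counts[matched_branch] = self.counts.get(matched_branch, 0) + 1

model = SmoothedBranchPredictor()
model.train({"branch-1": 70, "branch-2": 20, "branch-3": 10})
probs = model.predict()  # probabilities sum to 1; branch-1 ranks highest
model.feedback("branch-2")  # adjust the model after a confirmed match
```

Any model exposing this train/predict/feedback interface could be substituted without changing the surrounding scheduling logic.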
Fig. 4 is a flow chart illustrating a process of merging execution results of respective sub-services according to some embodiments of the present disclosure.
Referring to fig. 4, in step S410, the execution results of the sub-services are ordered, and the execution result of the last sub-service after the ordering is matched with the input parameters of the branch of the current sub-service after the execution is completed.
In an example embodiment, after each sub-service has been executed once, the execution results of the executed branches of the sub-services are ordered according to the logical order of the sub-services, and the ordered execution result of the previous sub-service is matched with the input parameters of the executed branches of the current sub-service. For example, suppose the target service is an order placing service whose sub-services include a commodity selection sub-service, an order confirmation sub-service, a payment sub-service, and the like, and suppose the executed branches of the order confirmation sub-service took the input parameters "first receiving address, workday delivery" and "first receiving address, holiday delivery". The execution result of the commodity selection sub-service is then matched against those input parameters: for example, if the execution result of the commodity selection sub-service includes commodity information and default-address delivery mode information, the default-address delivery mode information is matched against the receiving-address delivery mode parameter, which is an input parameter of the order confirmation sub-service.
Further, in an example embodiment, an execution result queue is generated based on execution results of one or more branches of respective sub-services of the target service; the execution results of the respective sub-services in the execution result queue are ordered based on the logical order of the respective sub-services.
In step S420, if the input parameters of the corresponding branch match, the execution result of that matched branch is matched with the input parameters of the executed branch of the next sub-service, and so on until all sub-services have been matched. For example, when the execution result of the commodity selection sub-service matches the input parameters of the order confirmation sub-service, the execution result of the order confirmation sub-service is matched with the input parameters of the payment sub-service; if those also match, the execution result of the payment sub-service is taken as the final execution result of the target service.
In step S430, if the input parameters of the corresponding branch do not match, the execution result of the previous sub-service is used as the input parameters of the current sub-service, and the current sub-service is re-executed. For example, if the input parameters of the order confirmation sub-service do not match, the execution result of the commodity selection sub-service, namely the commodity type and the color size, is used as the input parameters of the order confirmation sub-service, the order confirmation sub-service is re-executed, and the next round of matching is performed based on the re-executed result.
In step S440, after re-execution, the execution result of the current sub-service is matched with the input parameters of the executed branches of the next sub-service until all sub-services are matched.
In step S450, after all the sub-services have been matched, the execution result of the last sub-service is taken as the execution result of the merging process. For example, after the commodity selection sub-service, the order confirmation sub-service, and the payment sub-service have all been matched, the execution result of the payment sub-service, for example, deducting the corresponding commodity amount from the bank card, is taken as the execution result of the target service.
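The matching-and-merging flow of steps S410 to S450 can be sketched in Python as follows; `ExecutedSubService` and the order-service values are hypothetical, and `re_execute` stands for re-running a sub-service with new input parameters:

```python
class ExecutedSubService:
    """Record of one executed sub-service: the input parameters of the
    branch that was actually executed, the result it produced, and a
    callable used to re-execute the sub-service on a mismatch."""

    def __init__(self, executed_input, result, re_execute=None):
        self.executed_input = executed_input
        self.result = result
        self.re_execute = re_execute

def merge_results(ordered_sub_services):
    """Merge execution results in the logical order of the service link."""
    previous = ordered_sub_services[0].result
    for service in ordered_sub_services[1:]:
        if previous == service.executed_input:
            previous = service.result                # S420: matched, carry forward
        else:
            previous = service.re_execute(previous)  # S430/S440: mismatch, re-execute
    return previous                                  # S450: last sub-service's result

select = ExecutedSubService(None, "black-S")
confirm = ExecutedSubService("black-S", "order-confirmed")
pay = ExecutedSubService("order-confirmed", "payment-deducted")
print(merge_results([select, confirm, pay]))  # -> payment-deducted
```

On the fully matched path no sub-service runs twice; only a mispredicted branch triggers a single re-execution before matching resumes with the next sub-service.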
Fig. 5 shows a flow diagram of a service processing method according to other embodiments of the present disclosure.
Referring to fig. 5, in step S510, the service link of a serial target service is split into independent sub-services or sub-execution units according to a predetermined granularity, where the predetermined granularity may be a service unit with an independent function, and the sub-services have only logical relationships between data states, with no direct dependencies on one another. By splitting the serial target service into a plurality of sub-services, the sub-services of the target service can be executed concurrently, which improves the efficiency of calling the target service.
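A minimal sketch of such a split, with hypothetical names: each sub-service of an order placing link is an independent unit that records only the input parameter ranges it accepts, not a reference to its upstream neighbor:

```python
from dataclasses import dataclass, field

@dataclass
class SubServiceUnit:
    """One independently functional unit of the split service link; the
    units share only data-state relationships, so each can be scheduled
    for concurrent execution."""
    name: str
    input_parameter_range: list = field(default_factory=list)

order_link = [
    SubServiceUnit("select-commodity", ["commodity-type", "color-size"]),
    SubServiceUnit("confirm-order", ["delivery-address", "delivery-mode"]),
    SubServiceUnit("pay", ["payment-method"]),
]
# Only the later merge step re-imposes the logical order of the serial link.
```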
In step S520, the branches of each sub-service are determined based on the input parameters of the sub-service. For example, each type of output parameter of a sub-service is taken as one branch of that sub-service: in the commodity selection sub-service, each combination of color and size is one branch, such as the black S code branch through the black XL code branch, the white S code branch through the white XL code branch, and the red S code branch through the red XL code branch.
In step S530, a branch queue is generated based on the branches of each sub-service; for example, the branch queue of sub-service 2 includes branch 1, branch 2, and branch 3, and the branch queue of sub-service 3 likewise includes branch 1, branch 2, and branch 3.
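Branch queue generation as in step S530 might be sketched as follows; the probability figures are illustrative and the function name is hypothetical:

```python
from collections import deque

def build_branch_queue(branch_probabilities):
    """Order a sub-service's branches by predicted execution probability,
    largest first, and place them in a queue for the concurrent task pool."""
    ordered = sorted(branch_probabilities, key=branch_probabilities.get, reverse=True)
    return deque(ordered)

queue = build_branch_queue({"branch-1": 0.70, "branch-2": 0.20, "branch-3": 0.10})
print(list(queue))  # -> ['branch-1', 'branch-2', 'branch-3']
```

The task pool can then pop branches from the front of the queue, so the highest-probability branch is always taken first.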
In step S540, the execution probability of each branch of each sub-service is predicted based on a probability prediction model.
In an example embodiment, the execution probabilities of the respective branches of the sub-service are predicted by a probability prediction model, for example, the execution probability of branch 1 of sub-service 2 is 70%, the execution probability of branch 2 is 20%, the execution probability of branch 3 is 10%, the execution probability of branch 1 of sub-service 3 is 15%, the execution probability of branch 2 is 60%, and the execution probability of branch 3 is 25%. Further, in an example embodiment, branches of a sub-service are ordered according to the magnitude of the execution probabilities of the respective branches of the sub-service.
Further, parameters of the probability prediction model may be adjusted based on its prediction results, and the current execution probability of each branch of each sub-service may be predicted based on the adjusted probability prediction model.
The probabilistic predictive model may be a bayesian model, a support vector machine model, or a decision tree model, or may be another appropriate statistical model or a machine learning model, such as a neural network model or a logistic regression model, which is not particularly limited in this specification.
In step S550, the predicted branches of each sub-service are submitted to a concurrent task pool for concurrent execution, and the concurrent task pool executes the branches of the sub-services concurrently, in descending order of their execution probabilities. For example, the concurrent task pool may be a thread pool: the selected branches of each sub-service are submitted to the thread pool, and the thread pool preferentially selects the branches with high execution probability from each sub-service for concurrent execution. The concurrent task pool can provide a context environment for a sub-service at run time, including function dependencies, basic parameters, and the like; it can also isolate the runtime environment of the sub-service, providing an independent transaction execution space so that execution results do not affect one another. Executing the branches of each sub-service concurrently through the concurrent task pool improves data processing efficiency. Further, during concurrent execution, depending on the performance and computing resources of the server, the high-probability branches are executed preferentially, and not all branches of a sub-service need to be executed, which reduces the consumption of computing resources and further improves data processing efficiency.
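Using a thread pool as the concurrent task pool, the submission policy of step S550 might be sketched as follows; branches are submitted in descending probability order so that high-probability branches start first (the worker and branch names are hypothetical, and a real pool would also cap workers according to server load):

```python
from concurrent.futures import ThreadPoolExecutor

def run_branches(branch_probabilities, worker, max_workers=4):
    """Submit branches to a thread pool in descending order of execution
    probability and collect each branch's result."""
    ordered = sorted(branch_probabilities, key=branch_probabilities.get, reverse=True)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Each branch runs as an isolated task, so results do not interfere.
        futures = {branch: pool.submit(worker, branch) for branch in ordered}
        return {branch: future.result() for branch, future in futures.items()}

results = run_branches(
    {"black-S": 0.1, "black-L": 0.5, "white-L": 0.2},
    worker=lambda branch: f"executed {branch}",
)
print(results["black-L"])  # -> executed black-L
```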
In an example embodiment, the concurrent task pool fetches branches of the sub-services from their branch queues for execution according to the load condition of the service system, ensuring that the highest-probability branch of each sub-service is executed to completion. After a branch of a sub-service finishes executing, its execution result is added to an execution result queue and the sub-service is marked as being in a mergeable state. When all sub-services are in a mergeable state, the sub-services of the entire target service are submitted for result merging.
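The mergeable-state bookkeeping described above might be sketched as follows; the coordinator marks a sub-service mergeable once one of its branches completes and releases the target service for merging only when no sub-service remains pending (all names are hypothetical):

```python
class MergeCoordinator:
    """Tracks which sub-services still lack a completed branch and
    collects execution results into an execution result queue."""

    def __init__(self, sub_service_names):
        self.pending = set(sub_service_names)
        self.result_queue = []  # (sub-service, result), in completion order

    def branch_completed(self, sub_service, result):
        self.result_queue.append((sub_service, result))
        self.pending.discard(sub_service)  # mark the sub-service mergeable

    def ready_to_merge(self):
        return not self.pending

coordinator = MergeCoordinator({"select-commodity", "confirm-order", "pay"})
coordinator.branch_completed("select-commodity", "black-S")
coordinator.branch_completed("confirm-order", "order-confirmed")
print(coordinator.ready_to_merge())  # -> False, "pay" is still pending
coordinator.branch_completed("pay", "payment-deducted")
print(coordinator.ready_to_merge())  # -> True
```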
In step S560, the merging process is performed on the execution results of the respective sub-services, and step S560 includes steps S562 to S568, and the following details of steps S562 to S568 are described.
In step S562, the execution results of the respective sub-services are ordered, for example, according to the logical execution order of the respective sub-services on the service link of the target service.
In step S564, the ordered execution result of the last sub-service is matched with the input parameters of the executed branch of the current sub-service, and if the input parameters of the corresponding branch are matched, the matched execution result of the corresponding branch is matched with the input parameters of the executed branch of the next sub-service until all sub-services are matched.
In step S566, if the input parameters of the corresponding branch do not match, the execution result of the previous sub-service is used as the input parameters of the current sub-service, and the current sub-service is re-executed; after re-execution, the execution result of the current sub-service is matched with the input parameters of the executed branch of the next sub-service until all sub-services have been matched.
In step S568, after all the sub-services are matched, the execution result of the last sub-service is taken as the execution result after the merging process.
In addition, in step S570, the matching result of step S564 and/or step S566 may further be fed back to the probabilistic predictive model, and parameters of the probabilistic predictive model may be adjusted based on the fed-back matching result, so as to improve the prediction accuracy of the probabilistic predictive model, avoid the rollback cost of re-executing mispredicted branches, and further improve data processing efficiency.
In an example embodiment of the present specification, a service processing apparatus is also provided. Referring to fig. 6, the service processing apparatus 600 may include: a sub-service determining unit 610, an execution probability predicting unit 620, a concurrent executing unit 630, and a result processing unit 640. Wherein the sub-service determining unit 610 is configured to determine, when receiving a call request for a target service, a plurality of sub-services of the target service, where the sub-services include a plurality of branches, and the plurality of branches are determined based on input parameters of the sub-services; the execution probability prediction unit 620 is configured to determine an execution probability of each branch of each sub-service; the concurrency executing unit 630 is configured to select a branch corresponding to each sub-service based on the determined execution probability of each branch of each sub-service, and concurrently execute the selected branches of each sub-service; the result processing unit 640 is configured to combine the execution results of the sub-services to determine the execution result of the target service.
In some example embodiments of the present specification, referring to fig. 7, based on the foregoing scheme, the concurrency execution unit 630 includes: a branch queue generating unit 710, configured to generate a branch queue for each sub-service based on the determined execution probability of each branch of each sub-service; a branch ordering unit 720, configured to order the branches in the branch queue of each sub-service based on the magnitudes of their execution probabilities; and a selecting unit 730, configured to sequentially select corresponding branches from the branch queues of the sub-services in descending order of execution probability.
In some example embodiments of the present specification, based on the foregoing scheme, the result processing unit 640 includes: a marking unit configured to mark the sub-service as a mergeable state after the sub-service is executed; and the result merging unit is used for merging the execution results of all the sub-services when all the sub-services are in a mergeable state.
In some example embodiments of the present specification, based on the foregoing solution, the service processing apparatus 600 further includes: a result queue generating unit, configured to generate an execution result queue based on the execution results of one or more branches of each sub-service of the target service before the execution results of the sub-services are merged; and a result ordering unit, configured to order the execution results of the sub-services in the execution result queue based on the logical order of the sub-services.
In some example embodiments of the present specification, based on the foregoing scheme, the result merging unit is configured to: match the execution result of one sub-service in the execution result queue with the input parameters of the executed branch of the current sub-service; if the input parameters of the corresponding branch match, match the execution result of the matched branch with the input parameters of the executed branch of the next sub-service until all the sub-services have been matched; if the input parameters of the corresponding branch do not match, use the execution result of the previous sub-service as the input parameters of the current sub-service and re-execute the current sub-service; after re-execution, match the execution result of the current sub-service with the input parameters of the executed branch of the next sub-service until all the sub-services have been matched; and after all the sub-services have been matched, take the execution result of the last sub-service as the execution result of the merging process.
In some example embodiments of the present specification, based on the foregoing solution, the service processing apparatus further includes: and the feedback unit is used for updating the execution probability of the corresponding branch based on the matched result if the input parameters of the corresponding branch are matched.
In some example embodiments of the present specification, referring to fig. 8, based on the foregoing scheme, the execution probability prediction unit 620 includes: a data acquisition unit 810, configured to acquire historical execution data of each branch of each sub-service; a statistics unit 820, configured to count the execution times of each branch of each sub-service based on the historical execution data; and a probability determining unit 830, configured to determine, based on the statistical result, the ratio of each branch's execution count to the total execution count and take that ratio as the execution probability of the corresponding branch.
In some example embodiments of the present specification, based on the foregoing solution, the concurrency execution unit includes: a task submitting unit, configured to submit branches of each selected sub-service to a concurrent task pool; and the execution unit is used for sequentially and concurrently executing the branches of the sub-services according to the order of the execution probability of the branches of the sub-services from big to small through the concurrent task pool.
In some example embodiments of the present specification, based on the foregoing scheme, the sub-service determining unit includes: a service splitting unit, configured to split the target service into a plurality of sub-services according to a predetermined granularity; and a branch determining unit configured to determine the plurality of branches of each of the sub-services based on a range of input parameters of each of the sub-services.
According to the service processing apparatus in the example embodiment of fig. 6, on one hand, a plurality of sub-services of a target service are determined, each sub-service including a plurality of branches, so that a service link of the target service whose upstream and downstream are logically associated can be split into a plurality of sub-services, which facilitates performance optimization of the target service; on another hand, the execution probability of each branch of each sub-service of the target service is predicted, and the selected branches of each sub-service are executed concurrently based on those execution probabilities, which reduces the rollback time caused by branch execution errors and optimizes the calling efficiency of the back-end service; on yet another hand, the execution results of the sub-services are merged, which avoids the influence of branch prediction errors on the execution results and improves the accuracy of the execution result of the target service.
The service processing device provided in the embodiment of the present disclosure can implement each process in the foregoing method embodiment, and achieve the same functions and effects, which are not repeated here.
Further, the embodiment of the present specification also provides a service processing device, as shown in fig. 9.
The configuration and performance of the service processing device may vary considerably. The device may include one or more processors 901 and a memory 902, where the memory 902 may store one or more applications or data. The memory 902 may be transient storage or persistent storage. An application program stored in the memory 902 may include one or more modules (not shown in the figure), and each module may include a series of computer executable instructions for the service processing device. Still further, the processor 901 may be arranged to communicate with the memory 902 and to execute, on the service processing device, the series of computer executable instructions in the memory 902. The service processing device may also include one or more power supplies 903, one or more wired or wireless network interfaces 904, one or more input/output interfaces 905, one or more keyboards 906, and the like.
In a particular embodiment, a service processing device includes a memory and one or more programs, wherein the one or more programs are stored in the memory, the one or more programs may include one or more modules, each module may include a series of computer executable instructions for the service processing device, and the one or more programs are configured to be executed by one or more processors and comprise computer executable instructions for: determining a plurality of sub-services of a target service when a call request to the target service is received, wherein the sub-services comprise a plurality of branches, and the branches are determined based on input parameters of the sub-services; determining the execution probability of each branch of each sub-service; selecting branches corresponding to the sub-services based on the determined execution probability of the branches of the sub-services, and executing the selected branches of the sub-services concurrently; and merging the execution results of the sub-services to determine the execution result of the target service.
Optionally, the computer-executable instructions, when executed, select a branch corresponding to each of the sub-services based on the determined magnitude of the probability of execution of the branch for each of the sub-services, comprising: generating a branch queue for each of the sub-services based on the determined execution probabilities for each branch of each of the sub-services; ordering each branch in the branch queue of each sub-service based on the magnitude of the execution probability of each branch; and selecting corresponding branches from the branch queues of the sub-services in sequence according to the order of the execution probability from high to low.
Optionally, the computer executable instructions, when executed, combine the execution results of each of the sub-services, including: after executing the sub-service, marking the sub-service as a mergeable state; and when all the sub-services are in a combinable state, combining the execution results of the sub-services.
Optionally, when the computer executable instructions are executed, before the execution results of the respective sub-services are merged, the service processing method further comprises: generating an execution result queue based on execution results of one or more branches of each of the sub-services of the target service; and ordering the execution results of the sub-services in the execution result queue based on the logic sequence of the sub-services.
Optionally, the computer executable instructions, when executed, combine the execution results of each of the sub-services, including: matching the execution result of one sub-service in the execution result queue with the input parameters of the executed branch of the current sub-service; if the input parameters of the corresponding branch match, matching the execution result of the matched branch with the input parameters of the executed branch of the next sub-service until all the sub-services have been matched; if the input parameters of the corresponding branch do not match, using the execution result of the previous sub-service as the input parameters of the current sub-service and re-executing the current sub-service; after re-execution, matching the execution result of the current sub-service with the input parameters of the executed branch of the next sub-service until all the sub-services have been matched; and after all the sub-services have been matched, taking the execution result of the last sub-service as the execution result of the merging process.
Optionally, when the computer executable instructions are executed, the service processing method further comprises: if the input parameters of the corresponding branch do not match, feeding the matching result back to the probability prediction model; and adjusting parameters of the probability prediction model based on the matching result.
Optionally, the computer-executable instructions, when executed, determine the probability of execution of each branch of each of the sub-services comprises: acquiring historical execution data of each branch of each sub-service; counting the execution times of each branch of each sub-service based on the historical execution data; and determining the proportion of the execution times of each branch to the total execution times based on the statistical result, and taking the proportion as the execution probability of the corresponding branch.
Optionally, the computer-executable instructions, when executed, concurrently execute the selected branches of each of the sub-services, comprising: submitting the branches of the selected sub-services to a concurrent task pool; and sequentially and concurrently executing the branches of each sub-service according to the order of the execution probability of the branches of each sub-service from large to small through a concurrent task pool.
Optionally, the computer-executable instructions, when executed, determine a plurality of sub-services of the target service, comprising: splitting the target service into a plurality of sub-services according to a preset granularity; the plurality of branches for each of the sub-services is determined based on a range of input parameters for each of the sub-services.
The service processing device provided in the embodiment of the present disclosure can implement each process in the foregoing method embodiment, and achieve the same functions and effects, which are not repeated here.
In addition, the embodiments of the present disclosure further provide a storage medium configured to store computer executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer executable instructions stored in the storage medium can, when executed by a processor, implement the following flow: determining a plurality of sub-services of a target service when a call request to the target service is received, wherein the sub-services comprise a plurality of branches, and the branches are determined based on input parameters of the sub-services; determining the execution probability of each branch of each sub-service; selecting branches corresponding to the sub-services based on the determined execution probability of the branches of the sub-services, and executing the selected branches of the sub-services concurrently; and merging the execution results of the sub-services to determine the execution result of the target service.
Optionally, the computer executable instructions stored on the storage medium, when executed by the processor, select a branch corresponding to each of the sub-services based on the determined magnitude of the execution probability of each branch of each of the sub-services, comprising: generating a branch queue for each of the sub-services based on the determined execution probabilities for each branch of each of the sub-services; ordering each branch in the branch queue of each sub-service based on the magnitude of the execution probability of each branch; and selecting corresponding branches from the branch queues of the sub-services in sequence according to the order of the execution probability from high to low.
Optionally, the computer executable instructions stored in the storage medium, when executed by the processor, combine the execution results of the sub-services, including: after executing the sub-service, marking the sub-service as a mergeable state; and when all the sub-services are in a combinable state, combining the execution results of the sub-services.
Optionally, when the computer executable instructions stored in the storage medium are executed by the processor, before the execution results of the respective sub-services are merged, the service processing method further comprises: generating an execution result queue based on execution results of one or more branches of each of the sub-services of the target service; and ordering the execution results of the sub-services in the execution result queue based on the logic sequence of the sub-services.
Optionally, the computer executable instructions stored in the storage medium, when executed by the processor, combine the execution results of the sub-services, including: matching the execution result of one sub-service in the execution result queue with the input parameters of the executed branch of the current sub-service; if the input parameters of the corresponding branch match, matching the execution result of the matched branch with the input parameters of the executed branch of the next sub-service until all the sub-services have been matched; if the input parameters of the corresponding branch do not match, using the execution result of the previous sub-service as the input parameters of the current sub-service and re-executing the current sub-service; after re-execution, matching the execution result of the current sub-service with the input parameters of the executed branch of the next sub-service until all the sub-services have been matched; and after all the sub-services have been matched, taking the execution result of the last sub-service as the execution result of the merging process.
Optionally, when the computer executable instructions stored in the storage medium are executed by the processor, the service processing method further comprises: if the input parameters of the corresponding branch do not match, feeding the matching result back to the probability prediction model; and adjusting parameters of the probability prediction model based on the matching result.
Optionally, the storage medium storing computer executable instructions that, when executed by the processor, determine an execution probability for each branch of each of the sub-services, comprising: acquiring historical execution data of each branch of each sub-service; counting the execution times of each branch of each sub-service based on the historical execution data; and determining the proportion of the execution times of each branch to the total execution times based on the statistical result, and taking the proportion as the execution probability of the corresponding branch.
Optionally, the storage medium storing computer executable instructions that, when executed by the processor, concurrently execute selected branches of each of the sub-services, comprising: submitting the branches of the selected sub-services to a concurrent task pool; and sequentially and concurrently executing the branches of each sub-service according to the order of the execution probability of the branches of each sub-service from large to small through a concurrent task pool.
Optionally, the storage medium storing computer executable instructions that, when executed by the processor, determine a plurality of sub-services of the target service, comprising: splitting the target service into a plurality of sub-services according to a preset granularity; the plurality of branches for each of the sub-services is determined based on a range of input parameters for each of the sub-services.
The computer readable storage medium provided in the embodiments of the present specification can implement the respective processes in the foregoing method embodiments and achieve the same functions and effects, and are not repeated here.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must likewise be written in a particular programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); among them, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained simply by logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, besides implementing the controller purely as computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Indeed, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made between them, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple; for relevant points, refer to the corresponding description of the method embodiments.
The foregoing description is by way of example only and is not intended to limit the present document. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present document shall fall within the scope of its claims.

Claims (13)

1. A service processing method, comprising:
determining a plurality of sub-services of a target service when a call request to the target service is received, wherein the sub-services comprise a plurality of branches, and the branches are determined based on input parameters of the sub-services;
selecting a branch corresponding to each sub-service based on the magnitude of the execution probability of each branch of each of the sub-services, and concurrently executing the selected branches of each of the sub-services;
and merging the execution results of the sub-services to determine the execution result of the target service.
2. The service processing method according to claim 1, wherein before the selecting of the branch corresponding to each sub-service based on the magnitude of the execution probability of each branch of each sub-service, the method further comprises:
based on the historical execution data of each sub-service, the execution probability of each branch of each sub-service is determined.
3. The service processing method according to claim 1, wherein the selecting the branch corresponding to each sub-service based on the magnitude of the execution probability of each branch of each sub-service includes:
generating a branch queue of each sub-service based on the execution probability of each branch of each sub-service;
ordering each branch in the branch queue of each sub-service based on the magnitude of the execution probability of each branch;
and selecting corresponding branches from the branch queues of the sub-services in sequence according to the order of the execution probability from high to low.
4. The service processing method according to claim 1, wherein the merging the execution results of the sub-services includes:
after a sub-service is executed, marking the sub-service as being in a mergeable state;
and when all the sub-services are in a mergeable state, merging the execution results of the sub-services.
5. The service processing method according to claim 4, wherein before merging the execution results of the respective sub-services, the service processing method further comprises:
generating an execution result queue based on execution results of one or more branches of each of the sub-services of the target service;
and ordering the execution results of the sub-services in the execution result queue based on the logic sequence of the sub-services.
6. The service processing method according to claim 5, wherein the merging the execution results of the sub-services includes:
matching the execution result of one sub-service in the execution result queue with the input parameters of the executed branch of the current sub-service;
if the input parameters of the corresponding branch are matched, matching, after execution is finished, the execution result of the matched corresponding branch with the input parameters of the branch of the next sub-service, until all the sub-services are matched;
if the input parameters of the corresponding branch are not matched, taking the execution result of the previous sub-service as the input parameter of the current sub-service, and re-executing the current sub-service;
after re-execution, matching the execution result of the current sub-service with the input parameters of the executed branch of the next sub-service until all sub-services are matched;
and after all the sub-services are matched, taking the execution result of the last sub-service as the execution result after the merging processing.
7. The service processing method according to claim 6, further comprising:
if the input parameters of the corresponding branches are matched, updating the execution probability of the corresponding branches based on the matched results.
8. The service processing method of claim 2, wherein the determining the execution probability of each branch of each sub-service based on the historical execution data of each sub-service comprises:
acquiring historical execution data of each branch of each sub-service;
counting the execution times of each branch of each sub-service based on the historical execution data;
and determining the proportion of the execution times of each branch to the total execution times based on the statistical result, and taking the proportion as the execution probability of the corresponding branch.
9. The service processing method according to any one of claims 1 to 8, wherein the concurrently executing the selected branches of the respective sub-services comprises:
submitting the branches of the selected sub-services to a concurrent task pool;
and concurrently executing, through the concurrent task pool, the branches of each sub-service in descending order of the execution probabilities of the branches of each sub-service.
10. The service processing method of claim 9, wherein the determining the plurality of sub-services of the target service comprises:
splitting the target service into a plurality of sub-services according to a preset granularity;
and determining the plurality of branches of each of the sub-services based on a range of input parameters of each of the sub-services.
11. A service processing apparatus, comprising:
a sub-service determining unit, configured to determine, when a call request for a target service is received, a plurality of sub-services of the target service, where the sub-services include a plurality of branches, and the plurality of branches are determined based on input parameters of the sub-services;
a concurrent execution unit, configured to select a branch corresponding to each sub-service based on the magnitude of the execution probability of each branch of each of the sub-services, and to concurrently execute the selected branches of each sub-service;
and a result processing unit, configured to merge the execution results of the sub-services to determine the execution result of the target service.
12. A service processing device, comprising: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to:
determining a plurality of sub-services of a target service when a call request to the target service is received, wherein the sub-services comprise a plurality of branches, and the branches are determined based on input parameters of the sub-services;
select a branch corresponding to each sub-service based on the magnitude of the execution probability of each branch of each of the sub-services, and concurrently execute the selected branches of each of the sub-services;
and merge the execution results of the sub-services to determine the execution result of the target service.
13. A storage medium storing computer-executable instructions that when executed implement the following:
determining a plurality of sub-services of a target service when a call request to the target service is received, wherein the sub-services comprise a plurality of branches, and the branches are determined based on input parameters of the sub-services;
selecting a branch corresponding to each sub-service based on the magnitude of the execution probability of each branch of each of the sub-services, and concurrently executing the selected branches of each of the sub-services;
and merging the execution results of the sub-services to determine the execution result of the target service.
CN202310134767.2A 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium Pending CN116126538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310134767.2A CN116126538A (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910170430.0A CN109947564B (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium
CN202310134767.2A CN116126538A (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910170430.0A Division CN109947564B (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116126538A true CN116126538A (en) 2023-05-16

Family

ID=67009196

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310134767.2A Pending CN116126538A (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium
CN201910170430.0A Active CN109947564B (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910170430.0A Active CN109947564B (en) 2019-03-07 2019-03-07 Service processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN116126538A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857802A (en) * 2020-07-15 2020-10-30 上海云轴信息科技有限公司 Method, system and equipment for merging request group integration
CN117992198B (en) * 2024-02-06 2024-06-14 广州翌拓软件开发有限公司 Task processing method and system for adaptive scheduling

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8182270B2 (en) * 2003-07-31 2012-05-22 Intellectual Reserve, Inc. Systems and methods for providing a dynamic continual improvement educational environment
KR101406693B1 (en) * 2008-07-02 2014-06-12 고쿠리츠다이가쿠호진 토쿄고교 다이가꾸 Execution time estimation method, execution time estimation program, and execution time estimation device
CN102117198B (en) * 2009-12-31 2015-07-15 上海芯豪微电子有限公司 Branch processing method
US8793675B2 (en) * 2010-12-24 2014-07-29 Intel Corporation Loop parallelization based on loop splitting or index array
US8909582B2 (en) * 2013-02-04 2014-12-09 Nec Corporation Hierarchical latent variable model estimation device, hierarchical latent variable model estimation method, and recording medium
EP3053026A4 (en) * 2013-10-04 2017-04-12 Intel Corporation Techniques for heterogeneous core assignment
CN105654326B (en) * 2014-11-14 2019-08-09 阿里巴巴集团控股有限公司 A kind of information processing system and method
CN104615412B (en) * 2015-02-10 2018-11-09 清华大学 The method and system of execution control stream based on triggering command structure
CN107798571B (en) * 2016-08-31 2019-08-30 阿里巴巴集团控股有限公司 Malice address/malice order identifying system, method and device
FR3063856B1 (en) * 2017-03-09 2019-04-26 Commissariat A L'energie Atomique Et Aux Energies Alternatives EMISSION / RECEPTION SYSTEM USING ORTHOGONAL-LINEAR JOINT MODULATION
US20190004802A1 (en) * 2017-06-29 2019-01-03 Intel Corporation Predictor for hard-to-predict branches
CN107819861A (en) * 2017-11-16 2018-03-20 中国银行股份有限公司 Business data processing method, apparatus and system
CN108280452A (en) * 2018-01-26 2018-07-13 深圳市唯特视科技有限公司 A kind of image, semantic label correction method based on parallel network framework
CN108734398B (en) * 2018-05-17 2020-11-20 恒生电子股份有限公司 Task flow synchronization control method and device, storage medium and electronic equipment
CN108958896A (en) * 2018-06-16 2018-12-07 温州职业技术学院 Multi-thread concurrent processing system and method
CN109101276B (en) * 2018-08-14 2020-05-05 阿里巴巴集团控股有限公司 Method for executing instruction in CPU

Also Published As

Publication number Publication date
CN109947564A (en) 2019-06-28
CN109947564B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
US20200211106A1 (en) Method, apparatus, and device for training risk management models
US11176084B2 (en) SIMD instruction sorting pre-sorted source register's data elements into a first ascending order destination register and a second descending destination register
US10915706B2 (en) Sorting text report categories
CN107038042B (en) Service execution method and device
CN109447622B (en) Transaction type recommendation method and system and intelligent transaction terminal
CN109670784B (en) Method, device and system for informing waiting time
CN114202370A (en) Information recommendation method and device
EP3961384A1 (en) Automatic derivation of software engineering artifact attributes from product or service development concepts
CN109615130B (en) Method, device and system for regularly reminding business handling
CN110008018A (en) A kind of batch tasks processing method, device and equipment
CN110020427B (en) Policy determination method and device
WO2019191266A1 (en) Object classification method, apparatus, server, and storage medium
CN103189853A (en) Method and apparatus for providing efficient context classification
CN111882317A (en) Business processing system, readable storage medium and electronic device
CN109325810B (en) Recharge conversion improving method, electronic equipment and computer storage medium
US11720905B2 (en) Intelligent merchant onboarding
CN108920183B (en) Service decision method, device and equipment
CN109947564B (en) Service processing method, device, equipment and storage medium
CN115186151A (en) Resume screening method, device, equipment and storage medium
US20230273826A1 (en) Neural network scheduling method and apparatus, computer device, and readable storage medium
CN111159355A (en) Customer complaint order processing method and device
KR102104783B1 (en) Apparatus and method for providing information through analysis of movement patterns between stock prices
CN115904907A (en) Task processing method and device
CN111241395B (en) Recommendation method and device for authentication service
CN109903165B (en) Model merging method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination