CN113760262A - Task processing method, device, computer system and computer readable storage medium - Google Patents
- Publication number
- CN113760262A (application CN202110085483.XA)
- Authority
- CN
- China
- Prior art keywords
- task
- processed
- processing
- events
- executed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/36—Software reuse
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/542—Event management; Broadcasting; Multicasting; Notifications
Abstract
The present disclosure provides a task processing method, a task processing apparatus, a computer system, a computer-readable storage medium, and a computer program product. The task processing method comprises the following steps: calling a task flow component; assembling at least one task to be processed into a task flow pipeline chain based on the task flow component, wherein the task flow pipeline chain comprises at least one chain node and each chain node corresponds to one task to be processed; and processing the tasks to be processed in the task flow pipeline chain through the task flow component.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a task processing method, a task processing apparatus, a computer system, a computer-readable storage medium, and a computer program product.
Background
Business implementations in the computer field are usually embodied as the processing of tasks; for different business scenarios, it is therefore inevitable that an appropriate task processing mode must be constructed for each scenario.
In the process of implementing the concept disclosed herein, the inventors found that the related art has at least the following problem: the task processing mode usually depends strongly on the design of the service code, and the code is difficult to reuse directly across different service scenarios, so service code needs to be specifically designed for each service scenario to implement task processing.
Disclosure of Invention
In view of the above, the present disclosure provides a task processing method, a task processing apparatus, a computer system, a computer-readable storage medium, and a computer program product.
One aspect of the present disclosure provides a task processing method, including: calling a task flow component; assembling at least one task to be processed into a task flow pipeline chain based on the task flow component, wherein the task flow pipeline chain comprises at least one chain node and each chain node corresponds to one task to be processed; and processing the tasks to be processed in the task flow pipeline chain through the task flow component.
According to an embodiment of the present disclosure, the task to be processed includes task events having a logical relationship, and processing the task to be processed in the task flow pipeline chain by the task flow component includes: acquiring an infix expression corresponding to the task to be processed; converting the infix expression into a suffix expression; and operating on the suffix expression to realize the processing of the task to be processed.
According to an embodiment of the present disclosure, the task to be processed includes a plurality of target task events associated by a target logical operator, and processing the task to be processed in the task flow pipeline chain by the task flow component includes: determining the execution mode of the target task events according to the target logical operator; in the case that the target logical operator indicates that the execution order of the target task events is not limited, determining that the target task events are a plurality of task events executed in parallel; and in the case that the target logical operator indicates that the execution order of the target task events is limited, determining that the target task events are a plurality of task events executed in series.
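A minimal sketch of this determination, under the illustrative assumption (not fixed by the text) that "||" joins operands whose order is unconstrained and "&&" joins operands that must run in sequence:

```python
def execution_mode(operator: str) -> str:
    """Map a target logical operator to an execution mode.

    Assumed convention: '||' places no constraint on task order, so its
    operands may run in parallel; '&&' implies a required order, so its
    operands run serially.
    """
    if operator == "||":
        return "parallel"
    if operator == "&&":
        return "serial"
    raise ValueError(f"unknown operator: {operator}")
```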
According to an embodiment of the present disclosure, processing a task to be processed in the task flow pipeline chain by the task flow component includes: in the case that the task to be processed includes a plurality of task events executed in parallel, obtaining a thread pool; and processing the plurality of parallel task events simultaneously through the thread pool.
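The thread-pool step can be sketched as follows; Python's standard `concurrent.futures` pool stands in here for whatever pool implementation the component actually uses, and the task events are illustrative callables:

```python
from concurrent.futures import ThreadPoolExecutor

def process_parallel(task_events, max_workers=4):
    """Run independent task events concurrently and collect their results.

    pool.map preserves the input order of results even though the
    events themselves run in parallel.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda event: event(), task_events))

results = process_parallel([lambda: "task 1 done", lambda: "task 2 done"])
```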
According to an embodiment of the present disclosure, processing a task to be processed in the task flow pipeline chain by the task flow component includes: in the case that the task to be processed includes a plurality of task events executed in series, processing the serially executed task events in order; and terminating subsequent processing of the series if any task event fails to execute.
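The serial path with early termination can be sketched as follows (event names and the boolean success convention are illustrative assumptions):

```python
def process_serial(task_events):
    """Run (name, event) pairs in order; stop at the first failure.

    Each event is assumed to return True on success. Once any event
    fails, the remaining events are skipped, mirroring the
    early-termination rule described above.
    """
    completed = []
    for name, event in task_events:
        if not event():
            return completed, name  # name of the event that failed
        completed.append(name)
    return completed, None

completed, failed = process_serial([
    ("task 1", lambda: True),
    ("task 2", lambda: False),   # fails: "task 3" is never attempted
    ("task 3", lambda: True),
])
```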
According to an embodiment of the present disclosure, the suffix expression includes a plurality of elements, each element being either a logical operator characterizing the logical relationship or a task event, and operating on the suffix expression includes: acquiring the preset operation corresponding to each logical operator in the suffix expression; and traversing the elements of the suffix expression in sequence and operating on them in combination with a last-in-first-out stack, including: pushing the element directly onto the stack in the case that the element is a task event; and executing the corresponding preset operation in the case that the element is a logical operator.
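A sketch of this stack-based evaluation, assuming the operator set {"&&", "||"} and boolean event outcomes (the patent's actual preset operations are defined elsewhere):

```python
def evaluate_postfix(tokens, results):
    """Evaluate a postfix logical expression over task events.

    `tokens` mixes event names and operators; `results` maps each event
    name to its boolean outcome (e.g. whether the task was completed).
    Operands are pushed; an operator pops two operands and pushes the
    combined result.
    """
    stack = []
    for token in tokens:
        if token in ("&&", "||"):
            b, a = stack.pop(), stack.pop()
            stack.append((a and b) if token == "&&" else (a or b))
        else:
            stack.append(results[token])
    return stack.pop()

# "task 1 task 2 || task 3 &&"  ==  (task 1 || task 2) && task 3
result = evaluate_postfix(
    ["task 1", "task 2", "||", "task 3", "&&"],
    {"task 1": True, "task 2": False, "task 3": True},
)
```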
Another aspect of the present disclosure provides a task processing apparatus including: a calling module for calling the task flow component; an assembling module for assembling at least one task to be processed into a task flow pipeline chain based on the task flow component, wherein the task flow pipeline chain comprises at least one chain node and each chain node corresponds to one task to be processed; and a processing module for processing the tasks to be processed in the task flow pipeline chain through the task flow component.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the task processing method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the task processing method as described above when executed.
Another aspect of the present disclosure provides a computer program product comprising computer executable instructions for implementing the task processing method as described above when executed.
According to the embodiments of the present disclosure, the technical means adopted — calling a task flow component; assembling at least one task to be processed into a task flow pipeline chain based on the task flow component, wherein the task flow pipeline chain comprises at least one chain node and each chain node corresponds to one task to be processed; and processing the tasks to be processed in the task flow pipeline chain through the task flow component — introduces a task flow component through which tasks to be processed can be handled directly. Because a component is easier to reuse, the technical problem that code-level designs are difficult to reuse directly in different service scenarios is at least partially solved, achieving the technical effect of implementing task processing in different service scenarios without designing service code specifically for each.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a task processing method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow chart of a task processing method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a binary tree representation of a task_reward package, in accordance with an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow diagram for generating an infix expression according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a diagram of stack changes during a logical operation on a suffix expression in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates an overall flow of a task flow component according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a block diagram of a task flow pipeline chain according to an embodiment of the present disclosure;
FIG. 8 schematically shows a block diagram of a task processing device according to an embodiment of the present disclosure; and
FIG. 9 schematically shows a block diagram of a computer system suitable for implementing a method of task processing according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Taking a level-passing platform as a specific service scenario as an example: in this scenario, a user can receive rewards on the platform by executing tasks, and there may be multiple tasks, multiple rewards, and complex logical relationships between the tasks and the rewards.
For such a platform, the operation process includes: querying task_reward package information from a data table, then looping over each package, traversing the tasks and rewards within it, and processing them in sequence to obtain the results of whether the user has executed the tasks and drawn the rewards.
In the process of implementing the concept of the present disclosure, the inventors found that such a platform is usually implemented by hard coding, which results in high coupling and poor cohesion among the platform's service codes. Moreover, hard coding leaves behind no reusable component, so logic code written this way may be applicable in one scenario but cannot be reused in other similar scenarios (e.g., scenarios with other logical relationships).
In the process of implementing the concept of the present disclosure, the inventors also found that hard coding cannot flexibly support parallel and serial processing between tasks. Moreover, the order of tasks must be stored separately in dedicated fields, making the overall implementation complex.
Embodiments of the present disclosure provide a task processing method, a task processing apparatus, a computer system, a computer-readable storage medium, and a computer program product. The method comprises the following steps: calling a task flow component; assembling at least one task to be processed into a task flow pipeline chain based on the task flow component, wherein the task flow pipeline chain comprises at least one chain node and each chain node corresponds to one task to be processed; and processing the tasks to be processed in the task flow pipeline chain through the task flow component.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which a task processing method may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired and/or wireless communication links, and so forth.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a shopping application, a web browser application, a search application, an instant messaging tool, a mailbox client, and/or social platform software.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background management server that provides support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the task processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the task processing device provided by the embodiment of the present disclosure may be generally disposed in the server 105. The task processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the task processing device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Alternatively, the task processing method provided by the embodiment of the present disclosure may also be executed by the terminal device 101, 102, or 103, or may also be executed by another terminal device different from the terminal device 101, 102, or 103. Accordingly, the task processing apparatus provided by the embodiment of the present disclosure may also be disposed in the terminal device 101, 102, or 103, or in another terminal device different from the terminal device 101, 102, or 103.
For example, the task to be processed may be originally stored in any one of the terminal devices 101, 102, or 103 (for example, but not limited to, the terminal device 101), or may be stored on an external storage device and may be imported into the terminal device 101. Then, the terminal device 101 may locally execute the task processing method provided by the embodiment of the present disclosure, or send the task to be processed to another terminal device, server, or server cluster, and execute the task processing method provided by the embodiment of the present disclosure by another terminal device, server, or server cluster that receives the task to be processed.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows a flow chart of a task processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S203.
In operation S201, a task flow component is called.
According to embodiments of the present disclosure, the task flow component is a custom component (also called a component or control). It may exist, for example, in the form of a modular script or an external function, both of which can be used by direct invocation; it may also exist as an executable file, in which case the task flow component can be loaded into memory by running the executable or by performing the relevant environment configuration based on it. The task flow component can be invoked or loaded in any suitable manner, providing a basis for subsequently processing tasks based on the component.
In operation S202, at least one task to be processed is assembled into a task flow pipeline chain based on the task flow component, where the task flow pipeline chain includes at least one chain node and each chain node corresponds to one task to be processed.
According to embodiments of the present disclosure, a task to be processed may be a plurality of task events associated by a logical relationship, or a single task event without any logical relationship; task events without a logical association may be divided into different tasks to be processed. One chain node in the task flow pipeline chain may correspond either to a task to be processed formed from a plurality of logically associated task events or to a single task event, so the number of chain nodes is determined by the number of tasks to be processed. The task flow component collects the tasks to be processed (whether multiple logically associated task events or single task events), maps each to a chain node, and finally assembles them into a task flow pipeline chain, providing the framework for the component's subsequent task processing.
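A minimal sketch of such a chain, with each chain node holding one task to be processed; the node fields and the linked-list representation are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChainNode:
    """One node of the task flow pipeline chain: one task to be processed."""
    task: str                       # a single event, or an expression over events
    next: "ChainNode | None" = None

def assemble_chain(pending_tasks):
    """Link the collected tasks to be processed into a pipeline chain.

    Builds the chain back-to-front so the first task ends up at the head.
    """
    head = None
    for task in reversed(pending_tasks):
        head = ChainNode(task=task, next=head)
    return head

chain = assemble_chain(["task A", "task B"])
```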
In operation S203, a task to be processed in the task flow pipeline chain is processed by the task flow component.
According to embodiments of the present disclosure, the task to be processed corresponding to each chain node in the task flow pipeline chain can be processed by the task flow component; in particular, when the task to be processed consists of a plurality of task events associated by a logical relationship, the task flow component can carry out the logical processing of those task events and output the processing result.
According to embodiments of the present disclosure, in the case that the task to be processed is a plurality of task events related by a logical relationship, its code implementation generally includes code for each task event plus code for the logical relationship among them (or dedicated logic code designed for those events). With the above embodiment, the actual processing of the task — including the logical processing across its task events — is completed by the task flow component. The task events and the processing logic can therefore be implemented as two independent parts: only the code for each task event needs to be written, while the logical relationship among the events is stored as a logical expression, and the task flow component processes that expression to realize the processing of the task.
Through the above embodiments of the present disclosure, the task flow component is implemented in a pipeline mode, which isolates the task processing process (implemented by the component) from the specific task events (service-related code). This favors high cohesion and low coupling among service codes: each task event or task to be processed can be constructed or changed independently, while processing with various logical relationships among different task events can still be carried out efficiently. On this basis, the task flow component is particularly suitable for collecting and processing tasks composed of task events with complex logical relationships; and because the component is reusable, it can be applied to any service scenario with a complex service logic relationship.
Taking the level-passing platform as an example, a task_reward package is generated as the task to be processed according to the logical relationship between tasks and rewards in the level-passing process. The method shown in fig. 2 is further described below with reference to fig. 3 to 7 in combination with specific embodiments.
It should be noted that the method of fig. 2 processes the task to be processed directly; therefore, before executing the method, the task to be processed must first be obtained. In this embodiment, for example, a task_reward package may first be created by a service developer so that the task flow component can subsequently obtain it.
According to embodiments of the present disclosure, when creating a task_reward package, each package may be mapped to a binary tree. The binary tree reflects the logical relationships among the task events (tasks and rewards) in the package: for example, the relationships between tasks, between rewards, and between tasks and rewards, the precedence order among tasks, exclusivity between rewards, and which rewards can be earned by completing which tasks.
Fig. 3 schematically illustrates a binary tree representation of a task_reward package according to an embodiment of the disclosure.
As shown in fig. 3, the binary tree corresponding to a task_reward package — created after a service developer analyzes the task_reward scheme to be implemented on a certain level-passing platform — represents: execute task 1 or task 2, then execute task 3, then execute task 4 or task 5, and finally receive reward 1 or reward 2. The order of tasks 1 to 3 must satisfy: task 3 can be executed only after task 1 or task 2 has been executed, and task 4 or task 5 can be executed only after task 3. The manner of receiving the rewards must satisfy: receiving reward 1 precludes receiving reward 2.
After the service developer completes the code implementation of each task event in the service scenario and determines the logic implementation among a plurality of task events, a complete service scenario with practical application significance can be completed by combining the operations S201 to S203.
According to embodiments of the present disclosure, for example, after the code for the tasks and rewards on the level-passing platform has been implemented and a task_reward package adapted to the platform's requirements has been created, the task flow component may be introduced to support the platform's normal operation. With the support of the task flow component, a user (or consumer) can log in to the platform, execute the corresponding level-passing tasks, and obtain the corresponding rewards after passing each level.
Based on the above embodiment, in the case that the task to be processed is a task event with a logical relationship (i.e. can be mapped to a binary tree), the operation S203 includes operations S203-1 to S203-3.
In operation S203-1: an infix expression corresponding to the task to be processed is acquired.
According to embodiments of the present disclosure, the infix expression is saved when the service developer creates the task_reward package, for example.
Fig. 4 schematically illustrates a flow diagram for generating an infix expression according to an embodiment of the disclosure.
As shown in fig. 4, the flow includes operations S401 to S403.
In operation S401, a task_reward package is created.
According to the embodiment of the present disclosure, the task_reward package in this operation is still the one created for the aforementioned task_reward scheme: the service developer creates it to adapt to the service requirement (i.e., the task_reward scheme), and it may be mapped to a binary tree (as shown in fig. 3).
In operation S402, an infix expression of the task_reward package is generated.
According to the embodiment of the disclosure, the infix expression clearly represents the logical relationship between the tasks and the rewards in the task_reward package. To facilitate understanding and calculation by service developers, the infix expression of the binary tree is used to represent the logical relationship of the task_reward package, as follows:
((task 1| task 2) & & (task 3& (task 4| | task 5))) & (reward 1| | reward 2),
the infix expression is a result of the binary tree's middle-order traversal, and when the binary tree is traversed in the middle-order, each node is separated by a space and a bracket () is added to each subtree, so that the infix expression corresponding to the task _ reward package can be generated.
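The in-order traversal with per-subtree parentheses described above can be sketched as follows; the Node type and the concrete task and reward names are illustrative assumptions, not part of the disclosure:

```java
public class InfixBuilder {
    // A node is either a logical operator ("&&", "||") or a leaf task/reward event.
    static class Node {
        final String value;
        final Node left, right;
        Node(String value, Node left, Node right) {
            this.value = value; this.left = left; this.right = right;
        }
        Node(String value) { this(value, null, null); }
    }

    // In-order traversal: left subtree, node, right subtree; every non-leaf
    // subtree is wrapped in parentheses so the logical grouping is preserved.
    static String toInfix(Node n) {
        if (n.left == null && n.right == null) {
            return n.value; // leaf: a task or reward event
        }
        return "(" + toInfix(n.left) + " " + n.value + " " + toInfix(n.right) + ")";
    }

    public static void main(String[] args) {
        // Binary tree for ((task1 || task2) && task3) && (reward1 || reward2)
        Node root = new Node("&&",
                new Node("&&",
                        new Node("||", new Node("task1"), new Node("task2")),
                        new Node("task3")),
                new Node("||", new Node("reward1"), new Node("reward2")));
        System.out.println(toInfix(root));
        // prints: (((task1 || task2) && task3) && (reward1 || reward2))
    }
}
```

The parenthesized string can then be stored alongside the task_reward package as described in operation S403.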
In operation S403, the infix expression is stored.
According to the embodiment of the disclosure, a mapping relationship between the task_reward package and the corresponding infix expression is established, and the infix expression is stored in the database, so that when the task_reward package (the task to be processed) is subsequently processed, the corresponding infix expression can be obtained according to the mapping relationship.
In operation S203-2: the infix expression is converted into a suffix expression.
According to the embodiment of the disclosure, to facilitate understanding and calculation by service developers, the task to be processed is stored in the form of an infix expression; however, to facilitate operation by a computer, a suffix expression is required, and therefore after an infix expression is obtained it needs to be converted into a suffix expression. The conversion process may be implemented by traversing the binary tree, for example, or by combining a last-in, first-out stack through the following conversion steps:
a. The symbols, their priorities, and their corresponding operations are defined as shown in Table 1:
TABLE 1
b. Each element of the infix expression is traversed from left to right; if the element is a symbol, the operation corresponding to the symbol is executed; otherwise, the element is appended to the suffix expression.
c. The elements remaining in the stack are traversed: elements are popped and appended to the suffix expression until the stack is empty.
Through operations a to c described above, the infix expression generated in the foregoing operation S402 may be converted into a suffix expression, for example as follows:
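A minimal sketch of steps a to c, assuming only the "&&" and "||" operators and priorities implied by the text ("&&" binding tighter than "||"; Table 1 itself is not reproduced in this excerpt):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class PostfixConverter {
    // Assumed operator priorities; a higher number binds tighter.
    static final Map<String, Integer> PRIO = new HashMap<>();
    static {
        PRIO.put("||", 1);
        PRIO.put("&&", 2);
    }

    // Steps a-c: operands go straight to the output; operators and "(" pass
    // through a last-in, first-out stack.
    static String toPostfix(String infix) {
        Deque<String> stack = new ArrayDeque<>();
        StringBuilder out = new StringBuilder();
        for (String tok : infix.split("\\s+")) {
            if (tok.equals("(")) {
                stack.push(tok);
            } else if (tok.equals(")")) {
                while (!stack.peek().equals("(")) out.append(stack.pop()).append(' ');
                stack.pop(); // discard the matching "("
            } else if (PRIO.containsKey(tok)) {
                // pop operators of equal or higher priority, then push this one
                while (!stack.isEmpty() && PRIO.getOrDefault(stack.peek(), 0) >= PRIO.get(tok)) {
                    out.append(stack.pop()).append(' ');
                }
                stack.push(tok);
            } else {
                out.append(tok).append(' '); // a task or reward operand
            }
        }
        while (!stack.isEmpty()) out.append(stack.pop()).append(' '); // step c
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(
                toPostfix("( ( task1 || task2 ) && task3 ) && ( reward1 || reward2 )"));
        // prints: task1 task2 || task3 && reward1 reward2 || &&
    }
}
```

Tokens are assumed to be space-separated, matching the traversal output described in operation S402.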
In operation S203-3: the suffix expression is operated on to realize the processing of the task to be processed.
According to an embodiment of the present disclosure, the suffix expression may include, for example, a plurality of elements, each element being either a logical operator for characterizing a logical relationship or a task event, and operation S203-3 may include, for example: acquiring the preset operation corresponding to each logical operator in the suffix expression; and sequentially traversing each element in the suffix expression and operating in combination with a last-in, first-out stack. The sequential traversal may include, for example: pushing an element directly onto the stack in the case that the element is a task event; and executing the preset operation corresponding to the logical operator in the case that the element is a logical operator.
According to the above embodiment of the present disclosure, taking the suffix expression obtained in operation S203-2 as an example, the logical operators for characterizing the logical relationship may include "||" and "&&", and the task events may include task 1, task 2, task 3, reward 1, and reward 2. In this embodiment, the suffix expression may be processed by means of a last-in, first-out stack through the following operations:
a. creating a NamedPredicate class
The NamedPredicate class implements the Predicate functional interface (the Java 8 lambda expression API) and has its own name attribute to identify each predicate.
b. Creating a stack
The type of the elements stored in the stack may be, for example, NamedPredicate instances, each of which implements the test method of NamedPredicate.
c. Creating symbols, and corresponding operations, as shown in Table 2
TABLE 2
d. Traversing elements of suffix expressions
If the element is a symbol, the operation corresponding to the symbol is executed.
Otherwise, a NamedPredicate is constructed and pushed onto the stack.
e. Do logical operations
FIG. 5 schematically illustrates a diagram of a stack change for a logical operation with a suffix expression according to an embodiment of the present disclosure.
As shown in fig. 5, the logical operation is, for example, a stack operation, and there is only one element in the final stack.
f. Popping the top element of the stack and computing
Through the stack operations shown in fig. 5, the final stack-top element is popped and operated on, and the test methods of all the NamedPredicates are called in sequence to obtain the final result.
According to the above embodiments of the present disclosure, the final result may be, for example, the execution result of one or more tasks (e.g., success or failure), whether a reward can be collected, or the category of rewards that can be collected (e.g., reward 1, reward 2).
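Operations a to f above can be sketched as follows. The NamedPredicate shape matches the description (a Predicate implementation with a name attribute); the String tokens, the status map standing in for the real task/reward state lookup, and the concrete expression are assumptions for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class SuffixEvaluation {
    // a. NamedPredicate implements the Predicate functional interface
    //    (Java 8 lambda API) and carries a name identifying each predicate.
    static class NamedPredicate<T> implements Predicate<T> {
        final String name;
        final Predicate<T> delegate;
        NamedPredicate(String name, Predicate<T> delegate) {
            this.name = name;
            this.delegate = delegate;
        }
        @Override
        public boolean test(T t) { return delegate.test(t); }
    }

    // d.-f. Traverse the suffix expression: task/reward operands are wrapped in
    // a NamedPredicate and pushed; "&&"/"||" pop two predicates and push their
    // combination; the single element left on the stack is popped and its test
    // method (which calls all the nested tests in turn) gives the final result.
    static boolean evaluate(String suffix, Map<String, Boolean> status) {
        Deque<NamedPredicate<Map<String, Boolean>>> stack = new ArrayDeque<>();
        for (String tok : suffix.split("\\s+")) {
            if (tok.equals("&&") || tok.equals("||")) {
                NamedPredicate<Map<String, Boolean>> right = stack.pop();
                NamedPredicate<Map<String, Boolean>> left = stack.pop();
                Predicate<Map<String, Boolean>> combined =
                        tok.equals("&&") ? left.and(right) : left.or(right);
                stack.push(new NamedPredicate<>(
                        "(" + left.name + " " + tok + " " + right.name + ")", combined));
            } else {
                String name = tok; // a task or reward event
                stack.push(new NamedPredicate<>(name, m -> m.getOrDefault(name, false)));
            }
        }
        return stack.pop().test(status);
    }

    public static void main(String[] args) {
        Map<String, Boolean> status = new HashMap<>();
        status.put("task1", true);
        status.put("task3", true);
        status.put("reward1", true);
        // suffix form of ((task1 || task2) && task3) && (reward1 || reward2)
        System.out.println(evaluate("task1 task2 || task3 && reward1 reward2 || &&", status));
        // prints: true
    }
}
```

Combining via Predicate's default and()/or() methods keeps each stack operation a pure composition, so the whole expression is only evaluated when the final predicate's test method is called.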
According to the embodiment of the disclosure, the order of the tasks and rewards in the task_reward package does not need to be stored in a separate field: the task events (such as the tasks and rewards in the task_reward package) can be realized in order with only one logical expression. In this embodiment, the complicated logical relationship is stored using the binary tree's infix expression, and the binary tree's suffix expression is then used for operation in combination with the functions of the task flow component, thereby reducing the workload of developers.
According to an embodiment of the present disclosure, the task to be processed includes a plurality of target task events associated by target logical operators, and operation S203 may further include: determining the execution modes of the plurality of target task events according to the target logical operator; in the case that the target logical operator indicates that the execution order of the plurality of target task events is not limited, determining the plurality of target task events as a plurality of task events executed in parallel; and in the case that the target logical operator indicates that the execution order of the plurality of target task events is limited, determining the plurality of target task events as a plurality of task events executed in series.
According to a specific embodiment of the present disclosure, the target logical operator may be, for example, "!", "||", or "&&", where "||" may, for example, indicate that the execution order of the plurality of target task events it connects is not limited, and "&&" may, for example, indicate that the execution order of the plurality of target task events it connects is limited. The plurality of task events connected by "||" may be determined as task events executable in parallel, and the plurality of task events connected by "&&" may be determined as task events executable only in order (i.e., serially). Taking the infix expression corresponding to the binary tree shown in fig. 3 as an example, the execution order of task 1 and task 2 is not limited: task 1 and task 2 can be output in parallel and selected by the user for execution. However, to execute task 3, task 1 (or task 2) must be executed first, i.e., task 1 (or task 2) and task 3 need to be output in series, and the former (task 1 or task 2) must be executed before the latter (task 3) can be executed.
It should be noted that, besides determining from the logical operator "||" that the execution order of the target task events is not limited, the determination may also be made from the relationships among the task events themselves (i.e., whether there is an order limitation). If there are multiple tasks that have no logical relationship with each other but simultaneously have a logical relationship with another task, the task to be processed may be represented as (task a, task b, task c) && (task d), and task a, task b, and task c therein may be determined as tasks that can be output (or executed) in parallel.
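The operator-based decision described above amounts to a simple mapping, sketched here under the assumption that only "||" and "&&" occur between sibling events:

```java
public class ExecutionModeDecider {
    enum Mode { PARALLEL, SERIAL }

    // "||" leaves the order of the connected events unrestricted, so they can
    // run in parallel; "&&" constrains the order, so they must run serially.
    static Mode decide(String operator) {
        switch (operator) {
            case "||": return Mode.PARALLEL;
            case "&&": return Mode.SERIAL;
            default:  throw new IllegalArgumentException("unknown operator: " + operator);
        }
    }

    public static void main(String[] args) {
        System.out.println(decide("||") + " " + decide("&&"));
        // prints: PARALLEL SERIAL
    }
}
```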
According to the embodiment of the disclosure, in the case that the task to be processed includes a plurality of task events executed in parallel, a thread pool is obtained, and the plurality of task events executed in parallel are processed simultaneously through the thread pool.
According to a specific embodiment of the present disclosure, taking the task_reward package as an example, in the case that there are multiple task events (tasks or rewards) that may or must be computed in parallel (which may include output or execution), a thread pool is constructed, and the results of all the tasks and rewards are obtained in parallel (i.e., whether each task is completed and whether each reward has already been collected).
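A sketch of the thread pool approach; checkTask is an assumed stand-in for the real "is this task completed / reward collected" lookup:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelTaskCheck {
    // Illustrative assumption: event names ending in "2" are "not completed".
    static boolean checkTask(String name) {
        return !name.endsWith("2");
    }

    // When events have no ordering constraint, their results can be fetched
    // concurrently through a fixed-size thread pool.
    static Map<String, Boolean> checkAll(List<String> events) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Boolean>> futures = new ArrayList<>();
            for (String e : events) {
                futures.add(pool.submit(() -> checkTask(e)));
            }
            Map<String, Boolean> results = new LinkedHashMap<>();
            for (int i = 0; i < events.size(); i++) {
                results.put(events.get(i), futures.get(i).get());
            }
            return results;
        } catch (InterruptedException | ExecutionException ex) {
            throw new RuntimeException(ex);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(checkAll(Arrays.asList("task1", "task2", "reward1")));
        // prints: {task1=true, task2=false, reward1=true}
    }
}
```

All results are collected before any logical operation is performed, matching the "obtain the states of all the tasks, then operate" behavior described for the parallel mode.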
According to the embodiment of the disclosure, in the case that the task to be processed includes a plurality of task events executed in series, the plurality of task events executed in series are processed respectively, and in the case that a task event fails to execute, the subsequent processing of the plurality of task events executed in series is terminated.
According to an embodiment of the present disclosure, still taking the task_reward package as an example, in the case that multiple task events (tasks or rewards) must be computed in series (which may include output or execution), the results of the tasks and rewards are obtained sequentially for the elements (tasks or rewards) in the infix expression corresponding to the task_reward package.
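The serial short-circuit behavior can be sketched as follows; the event names and the check predicate are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class SerialTaskCheck {
    // Ordered events are processed one by one; processing stops as soon as one
    // fails, so later events are never touched (the short circuit).
    static List<String> runSerially(List<String> events, Predicate<String> check) {
        List<String> processed = new ArrayList<>();
        for (String e : events) {
            processed.add(e);
            if (!check.test(e)) {
                break; // terminate subsequent processing on failure
            }
        }
        return processed; // the events actually reached, in order
    }

    public static void main(String[] args) {
        List<String> events = Arrays.asList("task1", "task2", "task3");
        // task2 fails, so task3 is never processed.
        System.out.println(runSerially(events, e -> !e.equals("task2")));
        // prints: [task1, task2]
    }
}
```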
It should be noted that, whether the processing is parallel or serial, the final result of the computation is obtained through the aforementioned operations S203-1 to S203-3.
Through the embodiments of the disclosure, parallel and serial processing can be flexibly supported, which is applicable to processing task events occurring in either mode in various scenes.
In summary of the above embodiments, fig. 6 schematically shows the overall flow of a task flow component according to an embodiment of the present disclosure.
As shown in fig. 6, operations S601 to S605 are included.
In operation S601, user information logged in to a platform is acquired.
In operation S602, the task_reward packages that the user can select to execute are acquired according to the user information.
In operation S603, a task flow pipeline chain is dynamically assembled.
According to an embodiment of the present disclosure, the operation may correspond to operation S202 described above, for example.
FIG. 7 schematically shows a block diagram of a task flow pipeline chain according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, operations S602 to S603 may be expressed, for example, as finding the task_reward packages and the infix expression corresponding to each task_reward package according to the user, and constructing a pipeline (i.e., the task flow pipeline chain), where each handler in the pipeline (i.e., each chain node) corresponds to one task_reward package, i.e., to one binary tree; the constructed pipeline is shown in fig. 7.
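A sketch of the dynamically assembled pipeline: each handler (chain node) wraps one task_reward package's evaluated expression, and the pipeline runs them in order and collects one result per package. The handler names and the String-based user context are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class TaskFlowPipeline {
    static class Handler {
        final String packageName;
        final Predicate<String> expression; // the package's evaluated binary tree
        Handler(String packageName, Predicate<String> expression) {
            this.packageName = packageName;
            this.expression = expression;
        }
    }

    private final List<Handler> chain = new ArrayList<>();

    // Dynamic assembly (S603): one handler per task_reward package.
    TaskFlowPipeline addHandler(String name, Predicate<String> expr) {
        chain.add(new Handler(name, expr));
        return this; // allows fluent chaining
    }

    // Processing each chain node (S604) and returning the final result (S605).
    Map<String, Boolean> process(String user) {
        Map<String, Boolean> results = new LinkedHashMap<>();
        for (Handler h : chain) {
            results.put(h.packageName, h.expression.test(user));
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, Boolean> out = new TaskFlowPipeline()
                .addHandler("package1", u -> u.startsWith("vip"))
                .addHandler("package2", u -> u.length() > 3)
                .process("vipUser");
        System.out.println(out);
        // prints: {package1=true, package2=true}
    }
}
```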
In operation S604, processing is performed on each chain node in the pipeline chain.
According to an embodiment of the present disclosure, this operation may correspond to operation S203 described above, for example. In this operation, two processing modes, parallel and serial, can be supported. When applied to an actual scene, if the order of the task events needs to be limited, the handler can be processed in serial mode, which means a subsequent task can be completed only after the previous task is completed; if the task events have no required order, the handler can be processed in parallel mode, which means the task order is not of concern and the logical operation is performed after the states of all the tasks are obtained.
In operation S605, a final result is returned.
According to the embodiment of the present disclosure, the final result can be obtained according to the processing result of operation S604.
By adopting the pipeline mode and the binary tree technique, the embodiments of the disclosure realize high cohesion and low coupling among the service codes. The task flow component is realized through the pipeline mode and is applicable to various service scenes, effectively realizing component reuse. The elements in each chain node of the pipeline chain can be processed in parallel to improve performance, or processed in series to support short-circuiting: if a previous task is not completed, the subsequent tasks are not executed. The logical relationship between elements is stored using a binary tree structure, with the infix expression used for storage and the suffix expression used for computation, which suits both serial and parallel processing and reduces the workload of developers.
Fig. 8 schematically shows a block diagram of a task processing device according to an embodiment of the present disclosure.
As shown in fig. 8, the task processing device 800 includes a calling module 810, an assembling module 820, and a processing module 830.
And a calling module 810 for calling the task flow component.
An assembling module 820 for assembling at least one task to be processed into a task flow pipeline chain based on the task flow components, wherein the task flow pipeline chain includes at least one chain node, and each chain node corresponds to one task to be processed.
And the processing module 830 is configured to process the to-be-processed task in the task flow pipeline chain through the task flow component.
According to the embodiment of the disclosure, the task to be processed includes a task event having a logical relationship, and the processing module includes a first obtaining submodule, a converting submodule, and an operation submodule.
And the first obtaining submodule is used for obtaining the infix expression corresponding to the task to be processed.
And the conversion submodule is used for converting the infix expression into a suffix expression.
And the operation submodule is used for operating the suffix expression so as to realize the processing of the task to be processed.
According to an embodiment of the disclosure, the task to be processed includes a plurality of target task events associated by target logical operators, and the processing module includes a first determining sub-module, a second determining sub-module, and a third determining sub-module.
And the first determining submodule is used for determining the execution modes of a plurality of target task events according to the target logical operator.
And the second determining sub-module is used for determining the target task events to be a plurality of task events executed in parallel under the condition that the target logical operator represents that the execution sequence of the target task events is not limited.
And the third determining sub-module is used for determining the target task events to be a plurality of task events executed in series under the condition that the execution sequence of the target logical operator representing the target task events is limited.
According to an embodiment of the present disclosure, the processing module includes a second obtaining sub-module and a first processing sub-module.
And the second obtaining submodule is used for obtaining the thread pool under the condition that the task to be processed comprises a plurality of task events which are executed in parallel.
And the first processing submodule is used for simultaneously processing a plurality of task events which are executed in parallel through the thread pool.
According to an embodiment of the present disclosure, the processing module includes a second processing submodule and a termination submodule.
And the second processing submodule is used for respectively processing the plurality of task events which are executed in series under the condition that the task to be processed comprises the plurality of task events which are executed in series.
And the termination submodule is used for terminating the subsequent processing of the plurality of task events executed in series under the condition that the task event execution fails.
According to an embodiment of the present disclosure, the suffix expression includes a plurality of elements, each element being a logical operator or a task event for characterizing a logical relationship, and the operation submodule includes an obtaining unit and an operation unit.
And the acquisition unit is used for acquiring the preset operation corresponding to the logical operator in the suffix expression.
And the operation unit is used for sequentially traversing each element in the suffix expression and performing operation in combination with a last-in, first-out stack.
The operation unit may further comprise a stacking subunit and an execution subunit.
The stacking subunit is used for directly stacking under the condition that the element is a task event;
and the execution subunit is used for executing the preset operation corresponding to the logical operator under the condition that the element is the logical operator.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the calling module 810, the assembling module 820, and the processing module 830 may be combined and implemented in one module/sub-module/unit/sub-unit, or any one of the modules/sub-modules/units/sub-units may be split into a plurality of modules/sub-modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/sub-modules/units/sub-units may be combined with at least part of the functionality of other modules/sub-modules/units/sub-units and implemented in one module/sub-module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the invoking module 810, the assembling module 820, and the processing module 830 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, at least one of the calling module 810, the assembling module 820, and the processing module 830 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
It should be noted that, the task processing device portion in the embodiment of the present disclosure corresponds to the task processing method portion in the embodiment of the present disclosure, and the description of the task processing device portion specifically refers to the task processing method portion, which is not described herein again.
FIG. 9 schematically shows a block diagram of a computer system suitable for implementing a method of task processing according to an embodiment of the present disclosure. The computer system illustrated in FIG. 9 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 9, a computer system 900 according to an embodiment of the present disclosure includes a processor 901 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. Processor 901 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 901 may also include on-board memory for caching purposes. The processor 901 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the system 900 are stored. The processor 901, the ROM902, and the RAM 903 are connected to each other through a bus 904. The processor 901 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM902 and/or the RAM 903. Note that the programs may also be stored in one or more memories other than the ROM902 and the RAM 903. The processor 901 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The computer program, when executed by the processor 901, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to an embodiment of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM902 and/or the RAM 903 described above and/or one or more memories other than the ROM902 and the RAM 903.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method provided by the embodiments of the present disclosure, when the computer program product is run on an electronic device, the program code being configured to cause the electronic device to carry out the method of task processing provided by the embodiments of the present disclosure.
The computer program, when executed by the processor 901, performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted, distributed in the form of a signal on a network medium, and downloaded and installed through the communication section 909 and/or installed from the removable medium 911. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for executing computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming language includes, but is not limited to, Java, C++, Python, C, or the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.
Claims (10)
1. A method of task processing, comprising:
calling a task flow component;
assembling at least one task to be processed into a task flow pipeline chain based on the task flow components, wherein the task flow pipeline chain comprises at least one chain node, and each chain node corresponds to one task to be processed; and
and processing the tasks to be processed in the task flow pipeline chain through the task flow component.
2. The method of claim 1, wherein the pending tasks include task events having a logical relationship, processing the pending tasks in the task flow pipeline chain by the task flow component comprising:
acquiring an infix expression corresponding to the task to be processed;
converting the infix expression into a suffix expression; and
and operating the suffix expression to realize the processing of the task to be processed.
3. The method of claim 1, wherein the pending task comprises a plurality of target task events associated by target logical operators, and processing the pending task in the task flow pipeline chain by the task flow component comprises:
determining the execution modes of the target task events according to the target logical operator;
under the condition that the target logical operator represents that the execution sequence of the target task events is not limited, determining the target task events to be a plurality of task events executed in parallel; and
and under the condition that the execution sequence of the target logical operators for representing the target task events is limited, determining the target task events to be a plurality of task events executed in series.
4. The method of claim 1, wherein processing, by the task flow component, the task to be processed in the task flow pipeline chain comprises:
under the condition that the task to be processed comprises a plurality of task events which are executed in parallel, a thread pool is obtained; and
and simultaneously processing the plurality of task events executed in parallel through the thread pool.
5. The method of claim 1, wherein processing, by the task flow component, the task to be processed in the task flow pipeline chain comprises:
under the condition that the task to be processed comprises a plurality of task events which are executed in series, the plurality of task events which are executed in series are processed respectively; and
and in the case that the execution failure of the task event exists, terminating the subsequent processing of the plurality of task events executed in series.
6. The method of claim 2, wherein the suffix expression comprises a plurality of elements, each element being either a task event or a logical operator characterizing the logical relationship, and operating on the suffix expression comprises:
acquiring a preset operation corresponding to each logical operator in the suffix expression; and
sequentially traversing each element in the suffix expression and operating in combination with a last-in-first-out stack, comprising:
pushing the element directly onto the stack in the case where the element is a task event; and
executing the preset operation corresponding to the logical operator in the case where the element is a logical operator.
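The stack-based traversal in claim 6 can be sketched as standard postfix evaluation. As before, the operator meanings (`|` for parallel, `&` for serial) and the combined-node representation are assumptions for illustration.

```python
# Hypothetical sketch of claim 6: traverse a suffix expression with a
# last-in-first-out stack. Task events are pushed directly; each logical
# operator's "preset operation" pops two operands and pushes a combined node.
def evaluate_postfix(tokens):
    stack = []
    preset_ops = {"|": "parallel", "&": "serial"}  # assumed operator meanings
    for tok in tokens:
        if tok in preset_ops:
            right, left = stack.pop(), stack.pop()
            stack.append((preset_ops[tok], left, right))  # preset operation
        else:
            stack.append(tok)  # a task event: push directly onto the stack
    return stack.pop()

print(evaluate_postfix(["A", "B", "|", "C", "&"]))
# ('serial', ('parallel', 'A', 'B'), 'C')
```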
7. A task processing device, comprising:
a calling module for calling a task flow component;
an assembly module for assembling at least one task to be processed into a task flow pipeline chain based on the task flow component, wherein the task flow pipeline chain comprises at least one chain node and each chain node corresponds to one task to be processed; and
a processing module for processing the task to be processed in the task flow pipeline chain through the task flow component.
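The assembly and processing described in claim 7 can be sketched as a linked chain of nodes, one node per task. All class and method names here are illustrative, not from the patent.

```python
# Hypothetical sketch of claim 7: assemble tasks to be processed into a task
# flow pipeline chain (one chain node per task), then process the chain.
class ChainNode:
    def __init__(self, task):
        self.task = task   # the task to be processed at this node
        self.next = None   # the next chain node, if any

class TaskFlowComponent:
    def assemble(self, tasks):
        """Link each task into a chain node; return the head of the chain."""
        head = prev = None
        for task in tasks:
            node = ChainNode(task)
            if prev is None:
                head = node
            else:
                prev.next = node
            prev = node
        return head

    def process(self, head):
        """Walk the pipeline chain and run each node's task in turn."""
        results, node = [], head
        while node is not None:
            results.append(node.task())
            node = node.next
        return results
```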
8. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
9. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.
10. A computer program product comprising computer-executable instructions which, when executed, implement the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110085483.XA CN113760262A (en) | 2021-01-21 | 2021-01-21 | Task processing method, device, computer system and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113760262A (en) | 2021-12-07
Family
ID=78786377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110085483.XA Pending CN113760262A (en) | 2021-01-21 | 2021-01-21 | Task processing method, device, computer system and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113760262A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114217885A (en) * | 2021-12-17 | 2022-03-22 | 建信金融科技有限责任公司 | Data processing method, device and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140237442A1 (en) * | 2013-02-20 | 2014-08-21 | Bank Of America Corporation | Decentralized workflow management system |
CN104794095A (en) * | 2014-01-16 | 2015-07-22 | 华为技术有限公司 | Distributed computation processing method and device |
CN110659131A (en) * | 2019-08-15 | 2020-01-07 | 中国平安人寿保险股份有限公司 | Task processing method, electronic device, computer device, and storage medium |
CN111435354A (en) * | 2019-01-14 | 2020-07-21 | 北京京东尚科信息技术有限公司 | Data export method and device, storage medium and electronic equipment |
CN111897572A (en) * | 2020-08-06 | 2020-11-06 | 杭州有赞科技有限公司 | Data processing method, system, computer equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
Wang Ning (王宁): "Design of a General-Purpose Parallel Computing Framework Based on Clusters", Modern Computer (Professional Edition), no. 35, 31 December 2016 (2016-12-31) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109284197B (en) | Distributed application platform based on intelligent contract and implementation method | |
CN111338623B (en) | Method, device, medium and electronic equipment for developing user interface | |
CN109783562B (en) | Service processing method and device | |
CN115982491A (en) | Page updating method and device, electronic equipment and computer readable storage medium | |
CN113434241A (en) | Page skipping method and device | |
CN113515271A (en) | Service code generation method and device, electronic equipment and readable storage medium | |
CN113419740A (en) | Program data stream analysis method and device, electronic device and readable storage medium | |
US8510707B1 (en) | Mainframe-based web service development accelerator | |
CN113761871A (en) | Rich text rendering method and device, electronic equipment and storage medium | |
CN110377273B (en) | Data processing method, device, medium and electronic equipment | |
CN111414154A (en) | Method and device for front-end development, electronic equipment and storage medium | |
CN114116509A (en) | Program analysis method, program analysis device, electronic device, and storage medium | |
US8555239B1 (en) | Mainframe-based web service development accelerator | |
CN113448570A (en) | Data processing method and device, electronic equipment and storage medium | |
CN113760262A (en) | Task processing method, device, computer system and computer readable storage medium | |
CN113176907A (en) | Interface data calling method and device, computer system and readable storage medium | |
CN112905273A (en) | Service calling method and device | |
CN111158777A (en) | Component calling method and device and computer readable storage medium | |
CN113535565B (en) | Interface use case generation method, device, equipment and medium | |
CN112860447B (en) | Interaction method and system between different applications | |
CN113392311A (en) | Field searching method, field searching device, electronic equipment and storage medium | |
CN113064987A (en) | Data processing method, apparatus, electronic device, medium, and program product | |
CN114844957B (en) | Link message conversion method, device, equipment, storage medium and program product | |
CN115037729B (en) | Data aggregation method, device, electronic equipment and computer readable medium | |
CN112445517B (en) | Inlet file generation method, device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||