CN114168347A - Information processing method, information processing apparatus, server, and storage medium - Google Patents

Information processing method, information processing apparatus, server, and storage medium

Info

Publication number
CN114168347A
CN114168347A (application CN202111549891.2A)
Authority
CN
China
Prior art keywords: node, task, target, flow, service
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111549891.2A
Other languages
Chinese (zh)
Inventor
牛珍珠
张远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pingan Payment Technology Service Co Ltd
Original Assignee
Pingan Payment Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Pingan Payment Technology Service Co Ltd
Priority to CN202111549891.2A
Publication of CN114168347A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038 Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/5011 Allocation of resources, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5022 Mechanisms to release resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The present application, applicable to the technical field of artificial intelligence, provides an information processing method, an information processing apparatus, a server, and a storage medium. The method comprises: configuring each node included in a target flow as a task service according to a preset task configuration rule; configuring the target flow as a flow service according to a preset flow configuration rule; and, when a flow call request for the flow service is received, running the task service corresponding to each node according to the execution condition of each node in the target flow, so as to execute the target flow. By independently packaging the implementation code corresponding to each node, each node can be managed independently: the department responsible for each node can develop it independently, developers do not need to be familiar with the domain knowledge of multiple nodes, development difficulty is reduced, and development efficiency is improved.

Description

Information processing method, information processing apparatus, server, and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an information processing method, an information processing apparatus, a server, and a storage medium.
Background
With the development of cloud computing, big data, and artificial intelligence, countries and enterprises face the challenge of digital transformation. Digital transformation comprises information digitization, process digitization, and service digitization. At present, the process digitization achieved by domestic enterprises has only completed the first stage, process informatization: offline processes are converted into system operations, but the processes themselves are not fundamentally transformed and remain far from true digitization.
In the related art, enterprise digital platforms depend heavily on professional, highly skilled developers. A process often involves many departments, and each node in the process requires the domain knowledge of the corresponding department; it is a challenge for the developers of a process digitization platform both to develop the platform and to be familiar with the domain knowledge of every department involved. As a result, in the related art, all development work is concentrated on platform developers, and development efficiency is low because development is difficult.
Disclosure of Invention
In view of this, embodiments of the present application provide an information processing method, an information processing apparatus, a server, and a storage medium, so as to solve the problem in the related art that all development work is concentrated on platform developers and development efficiency is low due to the high development difficulty.
A first aspect of an embodiment of the present application provides an information processing method, including:
configuring each node included in a target process as a task service according to a preset task configuration rule, wherein each task service comprises a link to the implementation code of the corresponding node, the task services corresponding to the nodes are independent of one another, and the implementation code corresponding to each node is independently packaged;
configuring the target process as a process service according to a preset process configuration rule, wherein the process service comprises the task service corresponding to each node and the execution condition of each node; and
when a flow call request for the flow service is received, running the task service corresponding to each node according to the execution condition of each node in the target flow, so as to execute the target flow.
Further, running the task service corresponding to each node according to the execution condition of each node in the target process includes:
traversing each node in the target flow; if the execution condition of a node is currently met, creating a task execution container for that node and running the node's implementation code in the created task execution container, so that the task service corresponding to the node runs in the created container.
Further, traversing each node in the target flow includes:
if each node in the target flow corresponds to an execution sequence, traversing the nodes according to that execution sequence.
Further, running the task service corresponding to each node according to the execution condition of each node in the target process further includes:
for each node, when it is detected that the task service corresponding to the node has finished running, releasing the task execution container for that node and releasing the related resources used to run it.
Further, the method further comprises:
when a task call request for a target task service is received, creating a target container for running the target task service and running, in the target container, the implementation code pointed to by the link in the target task service, so that the target task service runs in the target container, wherein the target task service is any configured task service.
Further, the method further comprises:
before the target container is released, if a repeat task call request for the target task service is received, responding to the repeat request based on the target task service already running in the target container.
Further, the method further comprises:
combining at least one node in the target process with other nodes to form a new process, and configuring the new process as a new process service according to the preset process configuration rule, wherein the new process service comprises the task service corresponding to each node in the new process and the execution condition corresponding to each node in the new process.
A second aspect of an embodiment of the present application provides an information processing apparatus, including:
a task configuration unit, configured to configure each node included in a target process as a task service according to a preset task configuration rule, wherein each task service comprises a link to the implementation code of the corresponding node, the task services corresponding to the nodes are independent of one another, and the implementation code corresponding to each node is independently packaged;
a flow configuration unit, configured to configure the target flow as a flow service according to a preset flow configuration rule, wherein the flow service comprises the task service corresponding to each node and the execution condition of each node; and
a flow execution unit, configured to run, when a flow call request for the flow service is received, the task service corresponding to each node according to the execution condition of each node in the target flow, so as to execute the target flow.
Further, the flow execution unit is specifically configured to: traverse each node in the target flow; if the execution condition of a node is currently met, create a task execution container for that node and run the node's implementation code in the created task execution container, so that the task service corresponding to the node runs in the created container.
Further, in the flow execution unit, traversing each node in the target flow includes: if each node in the target flow corresponds to an execution sequence, traversing the nodes according to that execution sequence.
Further, the flow execution unit is further configured to: for each node, when it is detected that the task service corresponding to the node has finished running, release the task execution container for that node and release the related resources used to run it.
Further, the apparatus also comprises a node execution unit. The node execution unit is configured to create, when a task call request for a target task service is received, a target container for running the target task service, and to run, in the target container, the implementation code pointed to by the link in the target task service, so that the target task service runs in the target container, wherein the target task service is any configured task service.
Further, the apparatus also comprises a call response unit. The call response unit is configured to respond, if a repeat task call request for the target task service is received before the target container is released, to the repeat request based on the target task service already running in the target container.
Further, the apparatus also comprises a flow assembly unit. The flow assembly unit is configured to combine at least one node in the target flow with other nodes to form a new flow, and to configure the new flow as a new flow service according to the preset flow configuration rule, wherein the new flow service comprises the task service corresponding to each node in the new flow and the execution condition corresponding to each node in the new flow.
A third aspect of embodiments of the present application provides a server, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the information processing method provided in the first aspect are implemented.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the information processing method provided in the first aspect are implemented.
The information processing method, information processing apparatus, server, and storage medium provided by the embodiments of the application have the following beneficial effects: by independently packaging the implementation code corresponding to each node, each node can be managed independently, the department responsible for each node can develop it independently, and developers do not need to be familiar with the domain knowledge of multiple nodes, which reduces development difficulty and improves development efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the embodiments or in the description of the related art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without inventive effort.
Fig. 1 is a flowchart of an implementation of an information processing method provided in an embodiment of the present application;
FIG. 2 is a flowchart of another implementation of an information processing method provided in an embodiment of the present application;
FIG. 3 is a flowchart of an implementation of another information processing method provided in an embodiment of the present application;
fig. 4 is a block diagram of an information processing apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of a server according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiments of the application can acquire and process related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
In the embodiment of the application, the development efficiency of digitalizing the flow is improved based on an artificial intelligence technology.
The information processing method of the embodiments of the present application may be executed by a server; in that case the execution subject is the server.
It should be noted that the device acting as the server may include, but is not limited to, a server, a computer, a mobile phone, a tablet, a wearable smart device, and the like. The server may be an independent server, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), and big data and artificial intelligence platforms.
Referring to fig. 1, fig. 1 shows a flowchart of an implementation of an information processing method according to an embodiment of the present application, including:
step 101, configuring each node included in the target process into a task service according to a preset task configuration rule.
The task services comprise links of implementation codes of corresponding nodes, the task services corresponding to the nodes are mutually independent, and the implementation codes corresponding to the nodes are independently packaged.
The preset task configuration rule is generally a preconfigured rule for configuring a task. In practice, the task configuration rule is generally a rule for configuring a task under the Knative framework, where Knative is open-source software for standardizing serverless workloads.
The target process is usually a preset process, and the process usually includes a plurality of nodes. For example, if the target process is an approval process, the target process may include the following 3 nodes: the system comprises a first-level approval node, a second-level approval node and a third-level approval node.
The task service is generally a service for executing a task corresponding to a node. In practice, a service is typically a process.
Here, the execution subject of the information processing method is generally a control server. The control server may be any server on which Kubernetes and Knative are installed. The execution subject may configure each node included in the process as a task service according to the preset task configuration rule, so that one task service is obtained for each node. Kubernetes is an open-source container orchestration engine that manages containers deployed on target servers; it is widely recognized as the de facto industry standard for deploying containerized applications at scale in public, private, and hybrid cloud scenarios.
In practical application, each node is configured as a task service, and the configured task service is exposed as a link; when the link is called, the implementation code corresponding to the node is run, thereby executing the corresponding task service.
It should be noted that storing in the control server only a link to each node's implementation code consumes fewer storage resources than storing the implementation code itself, which helps save the control server's storage resources.
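As a concrete illustration of the two points above (tasks configured under Knative, and the control server storing only a link to each node's code), the sketch below builds a per-node manifest as a Python dict. This is a hedged sketch only: the field names follow the public serving.knative.dev/v1 Service schema, but the node name, image URL, and helper function are invented for illustration and are not part of the patent.

```python
# Illustrative sketch only. The manifest fields mirror the public
# serving.knative.dev/v1 Service schema; "level1-approval" and the
# registry URL are invented example values.
def make_task_service(node_name: str, image_link: str) -> dict:
    """Build a Knative Service manifest wrapping one node's independently
    packaged implementation code behind a link (a container image URL)."""
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": f"task-{node_name}"},
        "spec": {
            "template": {
                "spec": {
                    # The task service stores only a link to the node's
                    # implementation code, not the code itself.
                    "containers": [{"image": image_link}],
                }
            }
        },
    }

svc = make_task_service("level1-approval", "registry.example.com/approval:v1")
```

Each node produces one such independent manifest, so the control server holds only small configuration records plus links.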
And 102, configuring the target process into a process service according to a preset process configuration rule.
The flow service includes task service corresponding to each node and execution condition of each node.
The preset flow configuration rule is generally a preset rule for configuring a flow. In practice, the preset flow configuration rule is generally a rule for configuring a flow under a knative framework.
The flow service is generally a service for executing each node included in the flow. The execution condition is generally a preset condition for determining whether the corresponding node is executed. For example, for the second-level approval node, the execution condition may be: the first-level approval result is "unqualified".
In practical applications, each flow is usually configured as a flow service, and the configured flow service includes task services of each node included in the flow and includes execution conditions of each node.
Here, the execution subject may configure the target flow as a flow service according to the flow configuration rule.
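The flow-service structure described above, bundling each node's task service with its execution condition, can be sketched as follows; the schema and names are assumptions for illustration, not the patent's actual configuration format.

```python
# Hedged sketch: a flow service holds, for every node, the link to that
# node's task service and a predicate for its execution condition.
def make_flow_service(flow_name, nodes):
    """nodes: list of (node_name, task_service_link, execution_condition),
    where execution_condition maps a flow context dict to True/False."""
    return {
        "flow": flow_name,
        "nodes": [
            {"name": name, "task_service": link, "condition": cond}
            for name, link, cond in nodes
        ],
    }

approval_flow = make_flow_service("approval", [
    # Level-1 approval always runs; level-2 runs only when level 1
    # returned "unqualified" (the worked example in the text).
    ("level1", "http://task-level1", lambda ctx: True),
    ("level2", "http://task-level2",
     lambda ctx: ctx.get("level1_result") == "unqualified"),
])
```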
Step 103, when receiving the flow call request for the flow service, running the task service corresponding to each node according to the execution condition of each node in the target flow, so as to implement the execution of the target flow.
The flow call request is generally a request for calling a flow. In practice, the specific form of the flow call request may be "call flow 1", "run No. 1", or another form.
Here, after receiving the flow call request, the execution subject may execute each node according to the execution condition of each node in the target flow.
In practice, the execution subject may execute the nodes in parallel directly according to their execution conditions; execute them in order according to both the execution order and the execution conditions of the nodes included in the target flow; or execute them in parallel according to the execution order and the execution conditions of the nodes.
As an example, if flow 1 has 4 nodes, node 1 through node 4, and their execution order is node 1, node 3, node 4, node 2, the execution subject may determine whether each node currently satisfies its execution condition; if so, it starts the task service corresponding to that node, and if not, it does not.
In practice, the execution subject may configure the execution condition corresponding to each node in the target flow in advance, so that the target flow can be executed using the preconfigured conditions. For example, if the target flow is an approval flow, the execution condition of one node may depend on the output of another node's execution result: the condition of node 2 may be configured so that node 2 is not executed if the approval result of node 1 is "excellent", and is executed if the approval result of node 1 is "good". By configuring the execution conditions of each node in advance, the execution of each node in the target flow can be controlled flexibly, which makes the scheme more practical.
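A minimal sketch of the worked example above, in which node 2 is skipped when node 1's approval result is "excellent" and executed when it is "good"; the function name is invented for illustration.

```python
# Preconfigured execution condition for node 2, keyed off node 1's output
# (per the approval-flow example in the text).
def should_run_node2(node1_result: str) -> bool:
    # "excellent" means node 2 is skipped; "good" means node 2 runs.
    return node1_result == "good"
```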
According to the method provided by this embodiment, independently packaging the implementation code corresponding to each node enables independent management of each node: the department responsible for each node can develop it independently, and developers do not need to be familiar with the domain knowledge of multiple nodes, which reduces development difficulty and improves development efficiency.
In some optional implementations of this embodiment, running the task service corresponding to each node according to the execution condition of each node in the target flow may include: traversing each node in the target flow; if the execution condition of a node is currently met, creating a task execution container for that node and running the node's implementation code in the created container, so that the task service corresponding to the node runs in the created container.
Among them, the task execution container is generally a container for running a task service.
Here, the execution subject may traverse each node in the target flow, for example in ascending order of node number. For each node whose execution condition is currently satisfied, the execution subject creates a task execution container for the node and runs the node's implementation code in it, so that the node's task service runs in the created container.
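The traversal just described can be sketched as follows. The container runtime is stubbed out with a recording class, since the patent does not specify a container API; all names here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    name: str
    condition: Callable[[dict], bool]  # preconfigured execution condition
    implementation_link: str           # link to the node's packaged code

class RecordingRuntime:
    """Hypothetical stand-in for the container runtime: instead of starting
    a real container per node, it records which nodes would run."""
    def __init__(self):
        self.ran = []
    def create_container_and_run(self, node: Node):
        self.ran.append(node.name)

def run_flow(nodes, context, runtime):
    # Traverse each node in the target flow; when a node's execution
    # condition is currently met, create a dedicated task execution
    # container for it and run its implementation code there.
    for node in nodes:
        if node.condition(context):
            runtime.create_container_and_run(node)

nodes = [
    Node("level1", lambda ctx: True, "http://task-level1"),
    Node("level2", lambda ctx: ctx.get("level1_result") == "unqualified",
         "http://task-level2"),
]
rt = RecordingRuntime()
run_flow(nodes, {"level1_result": "unqualified"}, rt)
```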
It should be noted that creating a task execution container for each node lets the task services of different nodes run in independent containers, so each node's task service runs independently. In addition, because different flows likewise run in separate containers, an error in one flow does not cause errors in other flows or nodes, which helps guarantee the stability of data processing.
In some optional implementations, traversing each node in the target flow may include: if each node in the target flow corresponds to an execution sequence, traversing the nodes according to that execution sequence.
Each node in the target flow may be assigned an execution sequence in advance. For example, if flow 1 has 4 nodes, node 1 through node 4, then node 1 may be executed first, node 3 second, node 2 third, and node 4 fourth.
When each node in the target flow corresponds to an execution sequence, the execution subject may traverse the nodes in that order.
In some optional implementations, running the task service corresponding to each node according to the execution condition of each node in the target flow further includes: for each node, when it is detected that the task service corresponding to the node has finished running, releasing the task execution container for that node and releasing the related resources used to run it.
Here, each task service generally has a run-state parameter indicating its running state, so the execution subject can detect whether a task service has finished by checking the value of that parameter: for example, a value of 1 may indicate that the service is running, and a value of 0 may indicate that it has finished.
Here, for each node, when the task service corresponding to the node finishes running, the execution subject may release the task execution container for that node and release the other resources related to running it. Releasing resources promptly improves resource utilization.
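A hedged sketch of the release step above, assuming the run-state convention just described (1 for running, 0 for finished); the function and parameter names are invented.

```python
def reap_finished(containers, run_state, release):
    """containers: node name -> container handle;
    run_state: node name -> 1 (still running) or 0 (finished);
    release: callback freeing a container and its related resources."""
    for name in list(containers):
        if run_state(name) == 0:           # task service finished running
            release(containers.pop(name))  # free container and resources

released = []
live = {"level1": "container-1", "level2": "container-2"}
# level1 has finished (state 0); level2 is still running (state 1).
reap_finished(live, lambda n: 0 if n == "level1" else 1, released.append)
```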
Referring to fig. 2, fig. 2 is a flowchart illustrating an implementation of an information processing method according to an embodiment of the present disclosure. The information processing method provided by this embodiment may include the following steps:
step 201, configuring each node included in the target process into a task service according to a preset task configuration rule.
The task services comprise links of implementation codes of corresponding nodes, the task services corresponding to the nodes are mutually independent, and the implementation codes corresponding to the nodes are independently packaged.
Step 202, configuring the target process as a process service according to a preset process configuration rule.
The process service includes task service corresponding to each node and execution conditions of each node.
Step 203, when receiving the flow call request for the flow service, running the task service corresponding to each node according to the execution condition of each node in the target flow, so as to implement the execution of the target flow.
In the present embodiment, the specific operations of steps 201-203 are substantially the same as the operations of steps 101-103 in the embodiment shown in fig. 1, and are not repeated herein.
Step 204, when receiving a task call request for the target task service, creating a target container for running the target task service, and running an implementation code pointed by a link in the target task service in the target container, so as to run the target task service in the target container.
The target task service is configured task service. The target task service may be any task service that has been configured.
Wherein the target container is a container for running a target task service.
The task call request is generally a request for calling a task. In practice, its specific form may be "call task 1", "run step 1", or another form; this embodiment does not limit the specific form of the task call request.
Here, upon receiving the task call request, the execution subject may create a target container and then run the implementation code corresponding to the target task service in the created container, so that the target task service runs in the target container.
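The step above can be sketched as follows; the container-creation and code-running callbacks are stand-ins for whatever runtime the platform actually uses, and all names are illustrative.

```python
def handle_task_call(task_service, create_container, run_in_container):
    # On a task call request, create a target container and run, inside
    # it, the implementation code that the task service's link points to.
    container = create_container(task_service["name"])
    run_in_container(container, task_service["link"])
    return container

calls = []
target = handle_task_call(
    {"name": "level1-approval", "link": "http://task-level1"},
    lambda name: f"container-for-{name}",        # stub container factory
    lambda c, link: calls.append((c, link)),     # stub code runner
)
```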
This embodiment enables the task service corresponding to any configured node to be called and executed on demand.
Referring to fig. 3, fig. 3 is a flowchart illustrating an implementation of an information processing method according to an embodiment of the present disclosure. The information processing method provided by this embodiment may include the following steps:
step 301, configuring each node included in the target process into a task service according to a preset task configuration rule.
The task services comprise links of implementation codes of corresponding nodes, the task services corresponding to the nodes are mutually independent, and the implementation codes corresponding to the nodes are independently packaged.
Step 302, configuring the target process as a process service according to a preset process configuration rule.
The process service includes task service corresponding to each node and execution conditions of each node.
Step 303, when receiving a flow call request for a flow service, running a task service corresponding to each node according to an execution condition of each node in the target flow, so as to implement execution of the target flow.
Step 304: when a task call request for a target task service is received, create a target container for running the target task service, and run the implementation code pointed to by the link in the target task service in that container, so that the target task service runs in the target container.
The target task service is a task service that has already been configured.
In this embodiment, the specific operations of steps 301 to 304 correspond to the steps described in the foregoing embodiments; reference may be made to the related descriptions above, which are not repeated here.
Step 305: before the target container is released, if a re-task call request for the target task service is received, respond to the request based on the target task service running in the target container.
A re-task call request is generally a request to invoke the target task service again.
Here, when the execution subject receives a re-task call request for the target task service before the target container is released, it does not create a new container; instead, it responds to the request directly using the already-created target container and the target task service running in it.
In this embodiment, the target task service running in the target container may be called by multiple flows at the same time, or by multiple users separately, without creating a separate container for each call, which is highly practical. When all concurrent calls have finished, the target container and the related resources used to run it are released, which further improves resource utilization.
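A minimal sketch of this reuse-then-release behavior follows, using reference counting to decide when all concurrent calls have finished. The class names and the reference-counting design are assumptions about one possible implementation; the description above does not prescribe a mechanism.

```python
# Sketch: the target container is created on the first call, reused by any
# re-task call that arrives before release, and released only once every
# concurrent call has finished. Names are illustrative.
import threading

class SharedContainer:
    """Simulated target container shared by overlapping calls."""
    def __init__(self, service_name):
        self.service_name = service_name
        self.active_calls = 0
        self.released = False

class ContainerPool:
    def __init__(self):
        self._containers = {}
        self._lock = threading.Lock()

    def acquire(self, service_name):
        with self._lock:
            container = self._containers.get(service_name)
            if container is None:
                # first call: create the target container
                container = SharedContainer(service_name)
                self._containers[service_name] = container
            # re-task call before release: reuse the existing container
            container.active_calls += 1
            return container

    def release(self, container):
        with self._lock:
            container.active_calls -= 1
            if container.active_calls == 0:
                # all concurrent calls finished: free container and resources
                container.released = True
                del self._containers[container.service_name]

pool = ContainerPool()
first = pool.acquire("target_task")
second = pool.acquire("target_task")   # second caller reuses the container
reused = first is second
pool.release(first)
pool.release(second)                   # last release frees the container
```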
In an optional implementation of the embodiments of the present application, the information processing method may further include the following step: combining at least one node in the target flow with other nodes to form a new flow, and configuring the new flow as a new flow service according to the preset flow configuration rule.
The new flow service includes the task service corresponding to each node in the new flow and the execution condition corresponding to each node in the new flow.
Here, the execution subject may reuse the configured nodes of the target flow in another, new flow. This reduces the number of task services that must be configured and allows the task services of configured nodes to be reused effectively, which helps improve resource utilization and information processing efficiency.
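The assembly of a new flow from already-configured nodes can be sketched as follows; the helper and field names are illustrative assumptions.

```python
# Sketch: build a new flow service by combining nodes reused from the
# target flow with other nodes, so the reused nodes need no new task
# service configuration. Names are illustrative.
def assemble_flow(name, reused_nodes, extra_nodes):
    """Form a new flow service from reused and newly added node configs."""
    return {"flow": name, "nodes": list(reused_nodes) + list(extra_nodes)}

target_flow_nodes = [
    {"node": "node1", "task_service": "task1", "condition": None},
    {"node": "node2", "task_service": "task2", "condition": None},
]
new_flow = assemble_flow(
    "flow2",
    reused_nodes=[target_flow_nodes[0]],  # reuse node1's task service as-is
    extra_nodes=[{"node": "node5", "task_service": "task5", "condition": None}],
)
```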
Referring to fig. 4, fig. 4 is a block diagram of an information processing apparatus 400 according to an embodiment of the present disclosure. The information processing apparatus of this embodiment includes units for executing the steps of the embodiments corresponding to figs. 1 to 3; refer to those figures and their related descriptions. For convenience of explanation, only the portions related to the present embodiment are shown. As shown in fig. 4, the information processing apparatus 400 includes:
the task configuration unit 401 is configured to configure each node included in the target flow into a task service according to a preset task configuration rule, where the task service includes links of implementation codes of corresponding nodes, the task services corresponding to each node are independent of each other, and the implementation codes corresponding to each node are independently encapsulated;
a flow configuration unit 402, configured to configure a target flow as a flow service according to a preset flow configuration rule, where the flow service includes task services corresponding to nodes and execution conditions of the nodes;
the flow executing unit 403 is configured to, when receiving a flow call request for a flow service, run a task service corresponding to each node according to an execution condition of each node in the target flow, so as to implement execution of the target flow.
Here, the execution subject may configure each node included in the flow as a task service according to the preset task configuration rule, so that each node yields one task service. In practical applications, the configured task service is usually a link; when the link is called, the implementation code corresponding to the node runs, executing the corresponding task service. It should be noted that storing in the control server only links to the nodes' implementation codes consumes fewer storage resources than storing the implementation codes themselves, which helps save the control server's storage resources.
Then, the execution subject may configure the target flow as a flow service according to the flow configuration rule.
Thereafter, having received a flow call request, the execution subject may run each node according to its execution condition in the target flow. In practice, the nodes may be executed in parallel based solely on their execution conditions; they may be executed in sequence according to both the execution order and the execution conditions of the nodes; or they may be executed in parallel wherever the execution order and conditions permit.
As an example, suppose flow 1 contains four nodes (node 1, node 2, node 3, and node 4) to be executed in the order node 1, node 3, node 4, node 2. The execution subject may check whether each node currently satisfies its execution condition; if so, it starts running the task service corresponding to that node, and if not, it does not.
In practice, the execution subject may configure the execution condition of each node in the target flow in advance, so that the target flow can be executed using these preconfigured conditions. For example, if the target flow is an approval flow, the execution condition of one node may be that the output of another node's execution result satisfies a preset condition. The execution condition of node 2 might be configured as: if the approval result of node 1 is "excellent", node 2 is not executed; if the approval result of node 1 is "good", node 2 is executed. It should be noted that preconfiguring the execution conditions of the nodes allows flexible control over the execution of each node in the target flow, which is highly practical.
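The approval example can be written as a small condition check; the function name and the handling of results other than "excellent" and "good" are assumptions added for illustration.

```python
# Sketch of the approval example: node 2's execution condition is configured
# against node 1's approval result. "excellent" means node 2 is skipped;
# "good" means node 2 executes. Behavior for other results is assumed.
def should_run_node2(node1_result):
    if node1_result == "excellent":
        return False  # approval already conclusive; skip node 2
    if node1_result == "good":
        return True   # further approval needed; execute node 2
    return False      # other results: not specified in the example
```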
The apparatus provided by this embodiment independently encapsulates the implementation code of each node, so that each node is managed independently and the teams responsible for different nodes can develop independently, without requiring developers to master the domain expertise of multiple nodes. This reduces development difficulty and improves development efficiency.
As an embodiment of the present application, the flow executing unit 403 is specifically configured to: traverse each node in the target flow; if the execution condition of a node is currently satisfied, create a task execution container for that node and run the node's implementation code in the created container, so that the node's task service runs in its own task execution container.
Here, a task execution container is generally a container used to run a task service.
Here, the execution subject may traverse each node in the target flow, for example in ascending order of node number. For each node whose execution condition is currently satisfied, the execution subject may create a task execution container for it and run its implementation code there, so that the node's task service runs in the created container.
It should be noted that creating a task execution container for each node lets the task services of different nodes run in independent containers, so each node's task service runs independently. In addition, per-node containers isolate different flows from one another, preventing an error in one flow from affecting other flows or nodes, which helps guarantee the stability of data processing.
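The traversal-and-container-creation loop described above can be sketched as follows, with container creation simulated by a plain dictionary; the function and field names are illustrative assumptions.

```python
# Sketch: traverse the nodes of the target flow; for each node whose
# execution condition is currently satisfied, create a dedicated task
# execution container and run the node's implementation code in it.
def execute_flow(nodes, results=None):
    """nodes: list of dicts with 'name', 'condition' (callable on prior
    results), and 'impl' (the node's implementation code)."""
    results = {} if results is None else results
    containers = []
    for node in nodes:                              # traverse each node
        if node["condition"](results):              # execution condition met?
            container = {"node": node["name"]}      # per-node container
            results[node["name"]] = node["impl"]()  # run code in container
            containers.append(container)
    return results, containers

nodes = [
    {"name": "node1", "condition": lambda r: True, "impl": lambda: "good"},
    {"name": "node2", "condition": lambda r: r.get("node1") == "good",
     "impl": lambda: "approved"},
]
results, containers = execute_flow(nodes)
```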
As an embodiment of the present application, in the flow executing unit 403, traversing each node in the target flow includes: if the nodes in the target flow have an assigned execution order, traversing the nodes according to that order.
Each node in the target flow may be assigned an execution order in advance. For example, if flow 1 contains four nodes (node 1, node 2, node 3, and node 4), node 1 may be executed first, node 3 second, node 2 third, and node 4 fourth.
Here, when the nodes in the target flow have assigned execution orders, the execution subject may traverse the nodes according to those orders.
As an embodiment of the present application, the flow executing unit 403 is further specifically configured to: for each node, upon detecting that the node's task service has finished running, release the task execution container for that node and release the related resources used to run it.
Here, each task service typically exposes a run-state parameter indicating its running state, so the execution subject may detect whether a task service has finished by reading this parameter's value. For example, a value of 1 may indicate that the service is running, and a value of 0 that it has finished.
Here, for each node, when the node's task service finishes running, the execution subject may release that node's task execution container and the other resources related to it. Releasing resources promptly improves resource utilization.
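A sketch of release driven by the run-state parameter (1 for running, 0 for finished, as described above) follows. The class and function names are assumptions, and a real implementation would return the container's resources to the platform rather than set flags.

```python
# Sketch: poll each task execution's run-state parameter; when it reads 0
# (finished), release the node's task execution container and its related
# resources. Names are illustrative.
class TaskExecution:
    def __init__(self, node):
        self.node = node
        self.run_state = 1           # 1 = running, 0 = finished
        self.container_released = False
        self.resources_released = False

    def finish(self):
        self.run_state = 0

def reap(executions):
    """Release containers and resources for finished task services."""
    for ex in executions:
        if ex.run_state == 0 and not ex.container_released:
            ex.container_released = True   # release the task execution container
            ex.resources_released = True   # release related resources
    return executions

ex = TaskExecution("node1")
reap([ex])                       # still running: nothing released
running_kept = not ex.container_released
ex.finish()
reap([ex])                       # finished: container and resources released
```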
As an embodiment of the present application, the apparatus further includes a node execution unit (not shown in the figure). The node execution unit is configured to, when a task call request for a target task service is received, create a target container for running the target task service and run the implementation code pointed to by the link in the target task service in that container, so that the target task service runs in the target container.
The target task service is a task service that has already been configured; it may be any configured task service. The target container is the container in which the target task service runs.
A task call request is generally a request to invoke a task. In practice, it may take the form "call task 1", "run step 1", or another form; this embodiment does not limit the specific form of the task call request.
Here, upon receiving the task call request, the execution subject may create the target container and then run the implementation code corresponding to the target task service in it, so that the target task service executes in the target container.
This embodiment thus enables the task service corresponding to any configured node to be called and executed.
As an embodiment of the present application, the apparatus further includes a call response unit (not shown in the figure). The call response unit is configured to, before the target container is released, respond to a re-task call request for the target task service based on the target task service running in the target container.
A re-task call request is generally a request to invoke the target task service again.
Here, when the execution subject receives a re-task call request for the target task service before the target container is released, it does not create a new container; instead, it responds to the request directly using the already-created target container and the target task service running in it.
In this embodiment, the target task service running in the target container may be called by multiple flows at the same time, or by multiple users separately, without creating a separate container for each call, which is highly practical. When all concurrent calls have finished, the target container and the related resources used to run it are released, which further improves resource utilization.
As an embodiment of the present application, the apparatus further includes a flow assembling unit (not shown in the figure). The flow assembling unit is configured to combine at least one node in the target flow with other nodes to form a new flow, and to configure the new flow as a new flow service according to the preset flow configuration rule.
The new flow service includes the task service corresponding to each node in the new flow and the execution condition corresponding to each node in the new flow.
Here, the execution subject may reuse the configured nodes of the target flow in another, new flow. This reduces the number of task services that must be configured and allows the task services of configured nodes to be reused effectively, which helps improve resource utilization and information processing efficiency.
The apparatus provided by this embodiment independently encapsulates the implementation code of each node, so that each node is managed independently and the teams responsible for different nodes can develop independently, without requiring developers to master the domain expertise of multiple nodes. This reduces development difficulty and improves development efficiency.
It should be understood that, in the block diagram of the information processing apparatus shown in fig. 4, each unit executes the corresponding steps of the embodiments of figs. 1 to 3. Those steps have been explained in detail above; refer to the related descriptions of the embodiments corresponding to figs. 1 to 3, which are not repeated here.
Fig. 5 is a block diagram of a server according to another embodiment of the present application. As shown in fig. 5, the server 500 of this embodiment includes a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501, such as a program implementing an information processing method. When the processor 501 executes the computer program 503, it implements the steps of the information processing method embodiments described above, such as steps 101 to 103 shown in fig. 1. Alternatively, when executing the computer program 503, the processor 501 implements the functions of the units in the embodiment corresponding to fig. 4, such as units 401 to 403 shown in fig. 4; refer to the related description of that embodiment, which is not repeated here.
Illustratively, the computer program 503 may be divided into one or more units, which are stored in the memory 502 and executed by the processor 501 to implement the present application. The one or more units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 503 in the server 500. For example, the computer program 503 may be divided into a task configuration unit, a flow configuration unit, and a flow execution unit, whose specific functions are as described above.
The server may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will appreciate that fig. 5 is merely an example of the server 500 and does not limit it; the server 500 may include more or fewer components than shown, combine certain components, or use different components. For example, the server 500 may also include input/output devices, network access devices, buses, and the like.
The processor 501 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, and so on. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 502 may be an internal storage unit of the server 500, such as a hard disk or memory of the server 500. The memory 502 may also be an external storage device of the server 500, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the server 500. Further, the memory 502 may include both an internal storage unit and an external storage device of the server 500. The memory 502 is used to store the computer program and other programs and data required by the server 500, and may also temporarily store data that has been or is to be output.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated module, if implemented as a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium, which may be non-volatile or volatile. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable storage medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable storage media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not cause the corresponding technical solutions to depart in substance from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. An information processing method, characterized in that the method comprises:
configuring each node included in a target process into a task service respectively according to a preset task configuration rule, wherein the task service comprises links of implementation codes of corresponding nodes, the task services corresponding to the nodes are independent, and the implementation codes corresponding to the nodes are independently packaged;
configuring the target process into process service according to a preset process configuration rule, wherein the process service comprises task service corresponding to each node and execution conditions of each node;
and when a flow calling request aiming at the flow service is received, running the task service corresponding to each node according to the execution condition of each node in the target flow so as to realize the execution of the target flow.
2. The information processing method according to claim 1, wherein the running of the task service corresponding to each node according to the execution condition of each node in the target process includes:
traversing each node in the target flow, if the execution condition of the corresponding node is met currently, creating a task execution container aiming at the corresponding node, and running the implementation code of the corresponding node in the created task execution container to realize that the task service corresponding to the corresponding node is run in the created task execution container.
3. The information processing method according to claim 2, wherein the traversing each node in the target flow includes:
and traversing each node in the target flow according to the execution sequence of each node in the target flow if each node in the target flow corresponds to the execution sequence.
4. The information processing method according to claim 3, wherein the running of the task service corresponding to each node according to the execution condition of each node in the target flow further comprises:
and for each node, when detecting that the task service corresponding to the corresponding node is finished running, releasing the task execution container for the corresponding node and releasing related resources for running the corresponding task execution container.
5. The information processing method according to claim 1, characterized by further comprising:
when a task call request aiming at a target task service is received, a target container for running the target task service is created, an implementation code pointed by a link in the target task service is run in the target container, and the target task service is run in the target container, wherein the target task service is configured.
6. The information processing method according to claim 5, characterized by further comprising:
before the target container is released, if a re-task calling request aiming at the target task service is received, responding the re-task calling request based on the target task service operated by the target container.
7. The information processing method according to any one of claims 1 to 6, characterized by further comprising:
combining at least one node in the target process with other nodes to form a new process, and configuring the new process to be a new process service according to the preset process configuration rule, wherein the new process service comprises task services corresponding to the nodes in the new process and execution conditions corresponding to the nodes in the new process.
8. An information processing apparatus characterized in that the apparatus comprises:
the task configuration unit is used for configuring each node included in the target process into task services respectively according to a preset task configuration rule, wherein the task services include links of implementation codes of corresponding nodes, the task services corresponding to the nodes are mutually independent, and the implementation codes corresponding to the nodes are independently packaged;
the flow configuration unit is used for configuring the target flow into flow service according to a preset flow configuration rule, wherein the flow service comprises task service corresponding to each node and execution conditions of each node;
and the flow executing unit is used for running the task service corresponding to each node according to the executing condition of each node in the target flow when receiving the flow calling request aiming at the flow service so as to realize the execution of the target flow.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202111549891.2A 2021-12-17 2021-12-17 Information processing method, information processing apparatus, server, and storage medium Pending CN114168347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111549891.2A CN114168347A (en) 2021-12-17 2021-12-17 Information processing method, information processing apparatus, server, and storage medium


Publications (1)

Publication Number Publication Date
CN114168347A true CN114168347A (en) 2022-03-11

Family

ID=80487113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111549891.2A Pending CN114168347A (en) 2021-12-17 2021-12-17 Information processing method, information processing apparatus, server, and storage medium

Country Status (1)

Country Link
CN (1) CN114168347A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination