CN112766907A - Service data processing method and device and server - Google Patents

Service data processing method and device and server

Info

Publication number
CN112766907A
CN112766907A (application CN202110074027.5A)
Authority
CN
China
Prior art keywords: processing, target, level, service data, node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110074027.5A
Other languages
Chinese (zh)
Inventor
宋府昌
黎明鸣
刘垚
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority claimed from application CN202110074027.5A
Publication of CN112766907A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/10: Office automation; Time management
    • G06Q10/103: Workflow collaboration or project management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0609: Buyer or seller confidence or verification

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The specification provides a service data processing method, a service data processing device, and a server. With the method, before business data is processed, a plurality of processing flow directed graphs are first constructed from a plurality of template processing flows; preset breadth-first traversal and preset node level marking are then performed on each directed graph to obtain a preset node level mark list that stores the level mark of every processing node in every template processing flow. When service data is processed, a target processing flow matched with the target service data is determined; the level marks of the processing nodes contained in the target processing flow are determined by querying the preset node level mark list; and the processing nodes of the same level are then called, level by level, to process the target service data in parallel. This reduces the processing complexity of the target service data and improves its processing efficiency.

Description

Service data processing method and device and server
Technical Field
The present specification belongs to the technical field of big data processing, and in particular, to a method, an apparatus, and a server for processing service data.
Background
In a big data processing scenario, for service data accessed by a platform, such as a credit approval request to be processed, the platform often needs to first analyze the dependency relationships in the processing of the service data, and then, according to those dependencies, call a plurality of different processing nodes one after another in serial fashion to complete the processing and obtain a final result.
Therefore, when service data is processed by the existing method, the processing procedure is often complex and the processing efficiency low.
No effective solution to the above problems has yet been proposed.
Disclosure of Invention
The present specification provides a method, an apparatus, and a server for processing service data, so as to reduce the processing complexity of target service data and improve the processing efficiency of target service data.
The present specification provides a method for processing service data, including:
acquiring target service data to be processed;
determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes;
inquiring a preset node level mark list, and determining the level mark of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance;
and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
In one embodiment, calling a plurality of processing nodes of the same level, level by level, according to the level marks of the processing nodes in the target processing flow to process the target business data in parallel comprises:
calling a plurality of processing nodes of the current level to process the target service data in parallel as follows:
determining the level mark of the current level, and determining, from the target processing flow, the processing nodes whose level mark equals the level mark of the current level as the processing nodes of the current level;
counting the number of processing nodes of the current level;
obtaining a plurality of copies of the target service data through data backup according to the number of processing nodes of the current level; wherein the number of copies of the target service data equals the number of processing nodes of the current level;
sending the copies of the target service data to the plurality of processing nodes of the current level respectively;
and calling the plurality of processing nodes of the current level to process the received target service data in parallel.
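The per-level parallel invocation described above can be sketched in Python. This is an illustrative sketch, not the patented implementation: the function name `invoke_level` and the modeling of processing nodes as plain callables are assumptions introduced here.

```python
from concurrent.futures import ThreadPoolExecutor

def invoke_level(nodes, target_data):
    """Send an independent copy of the target service data to every
    processing node of the current level and collect the node results
    in parallel.  Each node is modeled as a callable taking the data."""
    # One backup copy per node, so nodes cannot interfere with each other.
    copies = [dict(target_data) for _ in nodes]
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        futures = [pool.submit(node, data) for node, data in zip(nodes, copies)]
        # Results come back in node order, ready for per-level aggregation.
        return [f.result() for f in futures]
```

Because every node receives its own backup of the data, the nodes of one level have no shared state and can safely run concurrently.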
In one embodiment, after invoking the plurality of processing nodes of the current hierarchy to process the received target service data in parallel, the method further comprises:
receiving a node processing result fed back by the processing node of the current level for processing the received target service data;
inquiring preset weight configuration parameters, and determining a preset weight corresponding to the processing node of the current level;
and obtaining, by statistics, the processing result of the current level for the target service data according to the node processing results fed back by the processing nodes of the current level and the corresponding preset weights.
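The weighted per-level aggregation can be sketched as follows. A simple weighted sum is assumed here; the name `level_result` is hypothetical, and the specification does not fix a particular statistic.

```python
def level_result(node_results, weights):
    """Combine the node results fed back by the current level's nodes
    into one level-wide result using the preset weights (assumed here
    to be a weighted sum)."""
    assert len(node_results) == len(weights), "one weight per node"
    return sum(r * w for r, w in zip(node_results, weights))
```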
In one embodiment, after the processing result of the current level for the target service data is obtained, the method further comprises:
comparing the processing result of the current level with a preset reference result threshold value to obtain a corresponding comparison result;
determining whether to call a processing node of a next level of the current level to process the target service data in parallel according to the comparison result;
and under the condition that the processing node of the next level of the current level is determined not to be called to process the target service data in parallel according to the comparison result, finishing the data processing of the target service data.
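The early-exit control flow described above, where the comparison with the reference threshold decides whether the next level is called at all, can be sketched as follows. The function name `run_levels` and the "continue only while the aggregate meets the threshold" semantics are assumptions for illustration.

```python
def run_levels(levels, data, threshold):
    """Process levels in order; stop early when a level's aggregate
    result falls below the preset reference threshold, so later levels
    are never invoked for requests that already failed."""
    results = []
    for nodes, weights in levels:
        scores = [node(data) for node in nodes]          # per-node results
        agg = sum(s * w for s, w in zip(scores, weights))  # weighted level result
        results.append(agg)
        if agg < threshold:   # comparison result decides whether to continue
            break
    return results
```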
In one embodiment, the preset node hierarchy marker list is established as follows:
acquiring a plurality of template processing flows related to business data processing;
constructing a plurality of processing flow directed graphs according to the plurality of template processing flows; wherein each processing flow directed graph comprises a plurality of associated processing nodes, and processing nodes with a predecessor-successor relationship are connected by directed edges;
according to a preset processing rule, respectively performing preset breadth-first traversal on the plurality of processing flow directed graphs so as to determine processing nodes belonging to the same level in each processing flow directed graph;
according to a preset marking rule, corresponding level marks are set for processing nodes belonging to the same level in each processing flow directed graph;
and storing the hierarchical marks of the processing nodes in each template processing flow to obtain the preset node hierarchical mark list.
In one embodiment, according to a preset processing rule, performing preset breadth-first traversal on the plurality of process flow directed graphs respectively to determine processing nodes belonging to the same level in each process flow directed graph, including:
determining the processing nodes belonging to the current level in the current processing flow directed graph according to the following modes:
determining a processing node of a previous level from the current processing flow directed graph according to a level mark of the previous level of the current level;
detecting whether there is a directed edge starting from a processing node of the previous level;
and in the case that a directed edge starting from a processing node of the previous level is detected, determining the processing node pointed to by that directed edge as a processing node of the current level.
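The breadth-first level marking described above can be sketched as follows. This is a sketch under assumptions: `mark_levels`, the adjacency-dict representation of the directed edges, and the choice to place a node one level below its deepest predecessor (so that all predecessors finish before the node runs) are introduced here, not taken from the specification.

```python
from collections import deque

def mark_levels(edges, roots):
    """Breadth-first traversal of a processing flow directed graph.
    `edges` maps each node to the nodes its directed edges point at;
    `roots` are the nodes with no predecessors.  Nodes that end up with
    the same level mark can later be invoked in parallel."""
    level = {r: 0 for r in roots}
    queue = deque(roots)
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            nxt_level = level[node] + 1
            # Push a node deeper if a longer predecessor chain reaches it,
            # so every predecessor's level is strictly smaller than its own.
            if level.get(nxt, -1) < nxt_level:
                level[nxt] = nxt_level
                queue.append(nxt)
    return level
```

For a diamond-shaped flow A→B, A→C, B→D, C→D, the traversal marks B and C with the same level, so both can process the data in parallel before D runs.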
In one embodiment, obtaining a plurality of template processing flows related to business data processing comprises:
acquiring a historical service data processing record;
extracting a plurality of processing flows of the historical service data according to the historical service data processing records;
and clustering the processing flows of the historical service data to obtain a plurality of template processing flows related to service data processing.
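The clustering step above can be sketched with a deliberately trivial stand-in: grouping historical flows that share the exact same node sequence into one template. The specification does not fix a particular clustering algorithm, so `cluster_flows` and the exact-signature grouping are assumptions for illustration only.

```python
def cluster_flows(historical_flows):
    """Group historical processing flows (lists of node names) that have
    the same node sequence, yielding one template per distinct shape."""
    templates = {}
    for flow in historical_flows:
        templates.setdefault(tuple(flow), []).append(flow)
    # One representative template per cluster, in first-seen order.
    return [list(sig) for sig in templates]
```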
In one embodiment, after obtaining the preset node hierarchy marker list, the method further comprises:
determining the logical relationship among processing nodes belonging to the same level in the processing flow directed graph according to the template processing flow;
determining the association degree between processing nodes belonging to the same level and target service data processing according to the template processing flow;
configuring corresponding preset weights for the processing nodes belonging to the same level according to the logical relationship between the processing nodes belonging to the same level and the association degree of the processing nodes and the target service data processing;
and storing the preset weight of the processing node in the template processing flow to obtain a preset weight configuration parameter.
In one embodiment, the target traffic data includes a credit evaluation request to be audited.
In one embodiment, the target service data further carries identity information of a target user and service information of a credit service for which the credit evaluation request to be audited is directed;
correspondingly, the step of determining the template processing flow matched with the target service data comprises the following steps:
inquiring a user database according to the identity information of the target user to determine a user tag of the target user;
determining the service type of the credit service applied by the target user, the data value of the service data and the returning mode of the service data according to the service information of the credit service;
and determining a matched target processing flow from the plurality of template processing flows according to the user label of the target user, the service type of the credit service applied by the target user, the data value of the service data and the returning mode of the service data.
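Steps S1 through S3 above can be sketched as a lookup keyed by the matching features. The key scheme (`user_tag`, `service_type`, `amount_band`, `repayment_mode`) and the exact-match dictionary are hypothetical; a real matcher could be fuzzier.

```python
def match_template(templates, user_tag, service_type, amount_band, repayment_mode):
    """Pick the template processing flow whose features match the
    request: the user's tag, the applied credit service's type, the
    band of the requested amount, and the repayment mode."""
    return templates.get((user_tag, service_type, amount_band, repayment_mode))
```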
In one embodiment, after invoking multiple processing nodes of the same hierarchy level in a layer-by-layer hierarchy to process the target traffic data in parallel, the method further comprises:
acquiring a processing result of each level in a target processing flow;
determining a credit evaluation result of the target user according to the processing result of each level;
and determining whether to send the service data of the credit service applied to the target user or not according to the credit evaluation result of the target user.
This specification also provides a service data processing apparatus, including:
the acquisition module is used for acquiring target service data to be processed;
the determining module is used for determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes;
the query module is used for querying a preset node hierarchy mark list and determining the hierarchy mark of the processing node in the target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance;
and the processing module is used for calling a plurality of processing nodes of the same level by level to process the target service data in parallel according to the level marks of the processing nodes in the target processing flow.
The present specification also provides another service data processing method, including:
acquiring target service data to be processed;
determining a target processing flow related to the target business data; wherein the target processing flow comprises a plurality of associated processing nodes;
constructing a corresponding processing flow directed graph according to the target processing flow;
performing preset breadth-first traversal processing and preset node level marking processing on the processing flow directed graph to obtain a level mark of a processing node in a target processing flow;
and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
The present specification also provides a server, including a processor and a memory for storing processor-executable instructions, where the processor executes the instructions to obtain target service data to be processed; determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes; inquiring a preset node level mark list, and determining the level mark of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance; and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
The present specification also provides a computer readable storage medium having stored thereon computer instructions that, when executed, implement obtaining target business data to be processed; determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes; inquiring a preset node level mark list, and determining the level mark of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance; and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
The present specification provides a method, an apparatus, and a server for processing business data. Before the business data is processed, a plurality of corresponding processing flow directed graphs may be constructed based on a plurality of template processing flows; preset breadth-first traversal and preset node level marking are then performed on these directed graphs to obtain a preset node level mark list storing the level marks of the processing nodes in each template processing flow. When processing the business data, a template processing flow matched with the target business data may be determined as the target processing flow; the level marks of all processing nodes contained in the target processing flow are determined by querying the preset node level mark list; and the processing nodes of the same level are then called, level by level, according to those level marks to process the target service data in parallel. This reduces the processing complexity of the target service data, improves its processing efficiency, and solves the technical problems of complicated processing and low processing efficiency that exist in conventional service data processing methods.
Drawings
In order to illustrate the embodiments of the present specification more clearly, the drawings used in the embodiments are briefly described below. The drawings in the following description show only some of the embodiments described in the present specification; those skilled in the art can obtain other drawings from them without any creative effort.
Fig. 1 is a schematic diagram of an embodiment of a system structure composition to which a method for processing service data provided by an embodiment of the present specification is applied;
fig. 2 is a flowchart illustrating a method for processing service data according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of an embodiment of a method for processing service data provided by an embodiment of the present specification, in an example scenario;
fig. 4 is a flowchart illustrating a method for processing service data according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a server according to an embodiment of the present disclosure;
fig. 6 is a schematic structural composition diagram of a service data processing device provided in an embodiment of the present specification;
fig. 7 is a schematic diagram of an embodiment of a method for processing service data provided by an embodiment of the present specification, in a scenario example;
fig. 8 is a schematic diagram of an embodiment of a method for processing service data provided by an embodiment of the present specification, in a scenario example;
fig. 9 is a schematic diagram of an embodiment of a method for processing service data provided by an embodiment of the present specification, in an example scenario.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
The existing service data processing method often needs to analyze the dependency relationships in the processing procedure and then, according to those dependencies and the corresponding order, call a plurality of different processing nodes to process the service data one after another. As a result, the processing procedure is relatively complex, and it is difficult to process the service data in a parallel manner, so the overall processing efficiency of the service data is low.
For the root cause of the above problems, the present specification considers that corresponding processing flow directed graphs may be respectively constructed for a plurality of template processing flows; and respectively performing preset breadth-first traversal processing and preset node level marking processing on the processing flow directed graph, determining and marking the level marks of the processing nodes in each template processing flow, and obtaining a preset node level mark list in which the level marks of the processing nodes in each template processing flow are stored. Furthermore, when the target service data is specifically processed, a template processing flow matched with the target service data can be determined as a target processing flow, and a preset node level mark list is inquired to determine the level mark of each processing node contained in the target processing flow; and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes to process the target service data in parallel. Therefore, the dependency relationship does not need to be analyzed and considered in the process of processing the target service data, and meanwhile, the target service data can be simultaneously processed by calling a plurality of processing nodes belonging to the same level in a parallel mode, so that the processing complexity of the target service data can be effectively reduced, and the processing efficiency of the target service data can be improved.
The embodiment of the present specification provides a method for processing service data, which may be specifically applied to a system including a server and a user terminal. In particular, reference may be made to fig. 1. The user terminal and the server can be connected in a wired or wireless mode so as to carry out specific data interaction.
In this embodiment, the server may specifically include a background server that is applied to a service data processing platform side and is capable of implementing functions such as data transmission and data processing. Specifically, the server may be, for example, an electronic device having data operation, storage function and network interaction function. Alternatively, the server may be a software program running in the electronic device and providing support for data processing, storage and network interaction. In the present embodiment, the number of servers is not particularly limited. The server may specifically be one server, or may also be several servers, or a server cluster formed by several servers.
In this embodiment, the user terminal may specifically include a front-end electronic device that is applied to a user side and can implement functions such as data acquisition and data transmission. Specifically, the user terminal may be, for example, a desktop computer, a tablet computer, a notebook computer, a smart phone, and the like. Alternatively, the user terminal may be a software application capable of running in the electronic device. For example, it may be some bank APP running on a smartphone, etc.
In this embodiment, before specifically processing the service data, the server may query the historical database of the platform to obtain a plurality of template processing flows related to the service data processing.
Then, the server can construct a plurality of flow directed graphs according to the plurality of template processing flows; the flow directed graph comprises a plurality of associated processing nodes, and the processing nodes with the bearing relationship are connected through directed edges.
Furthermore, the server can perform preset breadth-first traversal on the multiple processing flow directed graphs respectively according to preset processing rules so as to determine processing nodes belonging to the same level in each processing flow directed graph; and according to a preset marking rule, corresponding level marks are set for processing nodes belonging to the same level in each processing flow digraph, so that the level marks of the processing nodes contained in each template processing flow can be determined by performing preset breadth-first traversal processing and preset node level mark processing on the processing flow digraph.
Finally, the server may store the hierarchical labels of the processing nodes in the template processing flows to obtain the preset node hierarchical label list.
When the service data is specifically processed, the user terminal can respond to the user operation to generate and send the target service data to be processed to the server.
And the server receives the target service data to be processed, and determines a template processing flow matched with the target service data as a target processing flow.
Further, the server can determine the level mark of the processing node in the target processing flow by inquiring a preset node level mark list.
Then, the server may invoke, level by level, a plurality of processing nodes of the same level to process the target service data in parallel according to the level marks of the processing nodes in the target processing flow, complete data processing on the target service data, obtain a data processing result of the target service data, and feed back the data processing result of the target service data to the user terminal.
And the user terminal receives and displays the data processing result of the target service data to the user.
Through the embodiment, the server does not need to additionally consider the dependency relationship in the processing process, so that the processing complexity is simplified; and moreover, the processing nodes of the same level can be called level by level according to the level marks to process the target service data in parallel, so that the processing efficiency is improved.
Referring to fig. 2, an embodiment of the present specification provides a method for processing service data. The method is particularly applied to the server side. In particular implementations, the method may include the following.
S201: acquiring target service data to be processed;
S202: determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes;
S203: inquiring a preset node level mark list, and determining the level mark of a processing node in the target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance;
S204: and calling a plurality of processing nodes of the same level, level by level, according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
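The four steps above can be sketched end to end in miniature. The function `process` and the representation of the level mark list as a plain dict are assumptions; node calls are made sequentially here for clarity, whereas the server would invoke the nodes of one level in parallel.

```python
from collections import defaultdict

def process(target_data, flow_nodes, level_marks, node_impls):
    """S201-S204 in miniature: group the target flow's nodes by their
    preset level mark, then invoke each level's nodes in level order,
    without ever analyzing pairwise dependencies at processing time."""
    by_level = defaultdict(list)
    for name in flow_nodes:               # query the preset level mark list
        by_level[level_marks[name]].append(name)
    results = {}
    for lvl in sorted(by_level):          # level by level
        for name in by_level[lvl]:        # same-level nodes: parallel-safe
            results[name] = node_impls[name](target_data)
    return results
```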
Through the above embodiment, when processing the target business data the server does not need to additionally consider dependency relationships during processing, and can process the target business data of the same level in a parallel manner. The processing procedure is thus simplified, the target business data can be processed efficiently, and the user's waiting time is reduced.
In this embodiment, the preset node hierarchy flag list stores a hierarchy flag of each processing node included in each of the template processing flows in the template processing flow.
The template processing flows may be obtained by clustering in advance according to the processing flows of a large amount of historical service data in the acquired application scene. The plurality of template processing flows may cover a majority of the business data processing flows occurring in the application scenario.
In this embodiment, the preset node level flag list may be obtained by performing preset breadth-first traversal processing and preset node level flag processing on a plurality of process flow directed graphs constructed based on a plurality of template process flows in advance. The specific process of establishing the predetermined node hierarchy flag list will be described later.
In an embodiment, the target service data may specifically include a credit evaluation request to be audited, and the like.
In some application scenarios (e.g., credit service application scenarios), when a user needs to apply for service data (e.g., specific credit amount) of some credit services (e.g., house loan service, small amount loan service, credit loan, etc.), a corresponding credit evaluation request is often generated and submitted to the platform server.
The server can perform multiple auditing (approval) processing on the credit condition of the user according to the credit evaluation request to obtain a credit evaluation result aiming at the user; the server may then determine whether to provide the service data of the requested credit service to the user based on the credit evaluation result.
Through the embodiment, the service data processing method provided by the specification can be applied to a credit service application scene, so that credit evaluation can be efficiently performed for a user applying for a credit service.
Of course, the above listed target service data is only an illustrative example. In specific implementations, the target service data may include other types of service data according to the specific application scenario and processing requirements. For example, the target business data may also be order data to be processed in an online shopping scenario, or archive data to be examined and approved in an archive borrowing scenario. The specific type of the target service data and the application scenario are not limited in this specification.
In an embodiment, taking a credit evaluation request in a credit service application scenario as an example, the target service data further carries identity information of a target user and service information of a credit service for which the credit evaluation request to be audited is directed; correspondingly, the determining of the template processing flow matched with the target service data may include the following steps in specific implementation:
s1: inquiring a user database according to the identity information of the target user to determine a user tag of the target user; wherein the user tag comprises at least one of: a user grade label, a user activity label and a user occupation label;
s2: determining the service type of the credit service applied by the target user, the data value of the service data and the returning mode of the service data according to the service information of the credit service;
s3: and determining a matched target processing flow from the plurality of template processing flows according to the user label of the target user, the service type of the credit service applied by the target user, the data value of the service data and the returning mode of the service data.
Through the embodiment, the server can accurately determine the target processing flow with high processing matching degree with the target service data, so that the target service data can be accurately processed according to the target processing flow in the following process.
In this embodiment, the identity information of the target user may be specifically understood as identification information for indicating the target user. Specifically, the identity information of the target user may be a name of the target user, an account number of the target user, and the like.
In this embodiment, taking a credit service application scenario as an example, the user level tag may be specifically determined according to a VIP level of the target user in the platform account. The user activity label may be specifically determined according to the number of interactions that have occurred between the target user and the platform in the last period of time. The user occupation tag may be specifically determined according to occupation information filled in by the target user during registration.
In this embodiment, the platform may maintain a user database covering all of its users, and the platform may periodically update the user database according to user data entered by the users.
In specific implementation, the server can locate the record of the target user in the user database according to the identity information of the target user; that record can then be queried to obtain the user tag of the target user.
In this embodiment, taking a credit service application scenario as an example, when a user applies for a specific credit service, the user often inputs, for example, a service type of the applied credit service (e.g., a house loan service, a credit card service, or a small loan service), a data value of the applied service data (e.g., a total loan amount of the house loan service, etc.), according to an instruction; meanwhile, the target user selects the application information such as the accepted return mode (for example, return by 12 months) of the service data about the applied credit service, and the like according to the specific situation.
Correspondingly, after receiving a credit evaluation request initiated by a target user, the server can acquire the service information of the credit service by inquiring the application information input by the target user and stored by the platform.
In this embodiment, the server may perform matching and screening on the plurality of template processing flows according to the user tag of the target user, and specific service information such as the service type of the credit service applied by the target user, the data value of the service data, and the returning mode of the service data, so as to find the template processing flow with the highest matching degree with the data processing of the credit service applied by the target user as the target processing flow.
In this embodiment, after determining the target processing flow, the server may determine, according to the target processing flow, a plurality of associated processing nodes involved in processing the target service data. Further, the server may determine the hierarchical label of each processing node corresponding to the target processing flow by querying a preset node hierarchical label list.
Processing nodes bearing the same level mark in the target processing flow can be understood as processing nodes belonging to the same level. Because the processing nodes in the same level process the target service data independently of one another, they can be invoked in parallel to process the target service data simultaneously, which improves the processing efficiency of the target service data.
In an embodiment, the invoking, level by level, a plurality of processing nodes of each level according to the level label of the processing node in the target processing flow to process the target service data in parallel may include, in specific implementation, the following: calling a plurality of processing nodes of the current level to process the target service data in parallel according to the following modes:
s1: determining a hierarchy mark of a current hierarchy, and determining a processing node of the hierarchy mark as the hierarchy mark of the current hierarchy as a processing node of the current hierarchy from the target processing flow;
s2: counting the number of the processing nodes of the current level;
s3: obtaining a plurality of copies of the target service data through data duplication according to the number of processing nodes of the current level; wherein the number of copies is equal to the number of processing nodes of the current level;
s4: respectively sending the target service data to a plurality of processing nodes of the current level;
s5: and calling the processing nodes of the plurality of current levels, and processing the received target service data in parallel.
Through the embodiment, the server can call a plurality of processing nodes belonging to the same level to simultaneously process the target service data in a parallel mode, so that the processing efficiency of the target service data can be effectively improved.
In this embodiment, in a specific implementation, the plurality of processing nodes of each level may be invoked level by level, in ascending order of the level marks, to process the target service data in parallel, until all processing nodes of all levels in the target processing flow have been invoked.
Specifically, for example, as shown in fig. 3, the target processing flow includes 4 levels, and the level identifiers are: level 1, level 2, level 3, level 4. Where there is only one processing node belonging to level 1, two processing nodes belonging to level 2, three processing nodes belonging to level 3, and only one processing node belonging to level 4.
When the accessed target service data is specifically processed, the server may first invoke the processing node of level 1 to process the target service data; after the processing of the level 1 is finished, two processing nodes of the level 2 are called to process the target service data in parallel; after the processing of the level 2 is finished, three processing nodes of the level 3 are called to process the target service data in parallel; after the processing of the level 3 is completed, a processing node of the level 4 is called again to process the target service data. Thus, the processing of each hierarchy is executed hierarchy by hierarchy, and the processing of the target service data is completed.
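The level-by-level invocation just described (levels run strictly in order; nodes within a level run concurrently, each on its own copy of the data) can be sketched as follows. This is a minimal sketch, assuming each processing node is an ordinary callable; the names `process_by_level` and `levels` are illustrative, not from the original:

```python
from concurrent.futures import ThreadPoolExecutor


def process_by_level(levels, data):
    """Invoke processing nodes level by level, in ascending order of
    the level mark; the nodes within one level run in parallel.

    levels: dict mapping a level mark (int) to a list of node callables.
    data:   the target service data (a dict here, copied per node)."""
    results = {}
    for mark in sorted(levels):
        nodes = levels[mark]
        # Steps S3/S4 above: each node of the level receives its own copy.
        with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
            futures = [pool.submit(node, dict(data)) for node in nodes]
            results[mark] = [f.result() for f in futures]
    return results
```

As in the fig. 3 example, a `levels` dict of `{1: [n1], 2: [n2, n3], 3: [n4, n5, n6], 4: [n7]}` would run the single level-1 node first, then the two level-2 nodes concurrently, and so on.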
In one embodiment, when specifically processing the target service data, the server may invoke corresponding processing threads to call the plurality of processing nodes of each level, according to the level marks of the processing nodes in the target processing flow, so as to process the target service data in parallel.
In one embodiment, it is considered that in some more complex application scenarios, there will often be associations between different levels, rather than complete independence.
For example, taking a credit service application scenario as an example: when the credit condition of a target user is audited at a certain level in response to a credit evaluation request, and the target user is found to belong to the platform's risk list, a negative credit evaluation result can actually be produced directly; there is no need to spend further processing time and processing resources on the next level of auditing, such as checking the historical transaction records of the target user.
Therefore, after the processing node of a certain level is called to process the target service data in parallel, the whole processing result of the level can be summarized and determined; and determining whether to trigger the next-level service data processing according to the processing result of the level. Thereby, the overall processing efficiency can be further improved.
In an embodiment, after invoking the processing nodes of the multiple current hierarchies and processing the received target service data in parallel, when the method is implemented, the following may be further included:
s1: receiving a node processing result fed back by the processing node of the current level for processing the received target service data;
s2: inquiring preset weight configuration parameters, and determining a preset weight corresponding to the processing node of the current level;
s3: and counting to obtain the processing result of the current level of the target data according to the node processing result fed back by the processing node of the current level and the corresponding preset weight.
Through the embodiment, after the received target service data is processed in parallel by calling the plurality of processing nodes of the current level, the server can also obtain and integrate the node processing results fed back by the processing nodes, so that the accurate processing result of the current level can be obtained, and the integral processing condition of the current level can be reflected.
In this embodiment, the preset weight configuration parameter may specifically include a preset weight of a node processing result for a processing node in each template processing flow.
The preset weight configuration parameter may be obtained in advance according to a logical relationship between processing nodes in the same hierarchy and a degree of association (or importance) with the target service data processing. The manner of obtaining the preset weight configuration parameters will be described later.
In an embodiment, after the processing result about the current hierarchy of the target data is obtained through statistics, when the method is implemented, the following may be further included:
s1: comparing the processing result of the current level with a preset reference result threshold value to obtain a corresponding comparison result;
s2: determining whether to call a processing node of a next level of the current level to process the target service data in parallel according to the comparison result;
s3: and under the condition that the processing node of the next level of the current level is determined not to be called to process the target service data in parallel according to the comparison result, finishing the data processing of the target service data.
By the embodiment, whether the processing node of the next hierarchy needs to be called to continuously process the target service data can be judged by fully utilizing the processing result of the current hierarchy. And under the condition that the data processing result of the target service data can be obtained based on the processing result of the current hierarchy, the processing of the subsequent hierarchy can be stopped in time, and the data processing of the target service data is finished, so that the processing efficiency of the target service data can be further improved, and the consumption of processing resources is reduced.
In this embodiment, according to the comparison result, it may be determined that the processing node of the next hierarchy of the current hierarchy does not need to be called again to process the target service data in parallel under the condition that it is determined that the data processing result of the target service data can be directly obtained; and then, the subsequent data processing of the target service data can be directly finished, and the data processing result of the target service data is generated and fed back to the target user based on the current processing result.
Specifically, a credit service application scenario is taken as an example. And the processing node of the current level is a credit risk list retrieval node. When the processing node of the current level retrieves the target user from the credit risk list, it may be determined that the target user belongs to the credit risk user as a processing result of the current level. And then, the credit evaluation result of the target user as a risk credit user is directly generated as a final target service data processing result without calling a processing node of the next level to continuously audit other credit conditions of the target user, and the subsequent credit condition audit aiming at the target user is ended.
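The early-termination decision of steps S1–S3 can be sketched as follows. The pass/fail rule used here (a level result below the reference threshold ends processing immediately, as in the risk-list example) is an assumed interpretation of the text, and all names are illustrative:

```python
def run_with_early_stop(level_results, threshold):
    """Walk the level results in ascending order of the level mark and
    stop as soon as one level's result falls below the preset
    reference threshold (steps S1-S3 above).

    level_results: dict mapping level mark -> numeric level result."""
    for mark in sorted(level_results):
        if level_results[mark] < threshold:
            # End data processing without invoking later levels.
            return {"passed": False, "stopped_at": mark}
    return {"passed": True, "stopped_at": None}
```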
In one embodiment, after the target service data is processed by calling the processing nodes of the same hierarchy level by level in the above manner, a final data processing result of the target service data can be obtained; and further data processing can be carried out according to the data processing result of the target service data.
In an embodiment, after invoking, level by level, a plurality of processing nodes of the same level in the above manner to process the target service data in parallel, when the method is implemented, the method may further include: acquiring processing results of a plurality of levels in a target processing flow; and integrating the processing results of the multiple levels to obtain a final data processing result of the target service data.
In one embodiment, in implementation, the data processing result of the target service data may be obtained by performing a weighted summation over the processing results of the multiple levels.
In an embodiment, taking a credit service application scenario as an example, after calling a plurality of processing nodes of the same hierarchy level by level to process the target service data in parallel, when the method is implemented, the following may be further included:
s1: acquiring a processing result of each level in a target processing flow;
s2: determining a credit evaluation result of the target user according to the processing result of each level;
s3: and determining whether to send the service data of the credit service applied to the target user or not according to the credit evaluation result of the target user.
By the embodiment, the credit evaluation result of the target user can be determined more efficiently and accurately, and the service data of the applied credit service can be timely sent to the target user with good credit approved by credit according to the credit evaluation result, so that the waiting time of the user can be effectively reduced, and the use experience of the user is improved.
In this embodiment, in specific implementation, according to a credit evaluation result of a target user (i.e., a data processing result for finally obtaining target service data), when it is determined that the target user belongs to a user with good credit, it is determined that a credit service applied by the target user passes auditing; and further generating and sending the prompt information that the audit is passed to the target user, and sending the service data of the applied credit service to the target user.
On the contrary, according to the credit evaluation result of the target user, under the condition that the target user is determined to belong to the credit risk user, determining that the credit business audit applied by the target user is not passed; and then, prompt information that the approval fails can be generated and sent to the target user, and the service data of the applied credit service is refused to be sent to the target user.
In this embodiment, when specifically processing the service data, the server may determine a target processing flow matched with the target service data; determining the level marks of all processing nodes contained in the target processing flow by inquiring a preset node level mark list; and then, according to the level marks of the processing nodes, calling the processing nodes of the same level by level to process the target service data in parallel, thereby effectively reducing the processing complexity of the target service data, improving the processing efficiency of the target service data, and solving the technical problems of complex processing process and low processing efficiency of the existing method.
In an embodiment, before specifically processing the service data, a preset node level mark list meeting the requirement may be constructed in advance through preset breadth-first traversal processing and preset node level mark processing.
In an embodiment, the preset node hierarchy flag list may be specifically established in the following manner:
s1: acquiring a plurality of template processing flows related to business data processing;
s2: constructing a plurality of flow directed graphs according to the plurality of template processing flows; wherein each flow directed graph comprises a plurality of associated processing nodes, and processing nodes with a predecessor-successor relation are connected by directed edges;
s3: according to a preset processing rule, respectively performing preset breadth-first traversal on the plurality of processing flow directed graphs so as to determine processing nodes belonging to the same level in each processing flow directed graph;
s4: according to a preset marking rule, corresponding level marks are set for processing nodes belonging to the same level in each processing flow directed graph;
s5: and storing the hierarchical marks of the processing nodes in each template processing flow to obtain the preset node hierarchical mark list.
Through the embodiment, the preset breadth-first traversal processing and the preset node level marking processing can be performed on the processing flow directed graphs constructed based on the template processing flows, the processing nodes belonging to the same level in each template processing flow can be accurately found out and correspondingly marked, and therefore the preset node level marking list storing the level marks of the processing nodes in each template processing flow can be obtained more efficiently, and the subsequent direct use is facilitated.
In an embodiment, when constructing the corresponding directed graph according to the template processing flow, the specific implementation may include the following: determining a plurality of processing nodes contained in the template processing flow and the predecessor-successor relations between different processing nodes according to the template processing flow; drawing the corresponding plurality of processing nodes; and connecting the processing nodes that have a predecessor-successor relation with corresponding directed edges, according to those relations, to obtain the flow directed graph.
In an embodiment, the performing, according to a preset processing rule, a preset breadth-first traversal on the plurality of process flow directed graphs respectively to determine processing nodes belonging to the same level in each process flow directed graph may include the following steps: determining the processing nodes belonging to the current level in the current processing flow directed graph according to the following modes:
s1: determining a processing node of a previous level from the current processing flow directed graph according to a level mark of the previous level of the current level;
s2: detecting whether any directed edge starts from a processing node of the previous level;
s3: and under the condition that the existence of the directed edge starting from the processing node of the previous hierarchy is detected, determining the processing node pointed by the directed edge as the processing node of the current hierarchy.
Through the embodiment, the processing nodes belonging to the current level can be found out efficiently and comprehensively through breadth-first traversal, and then the corresponding marking processing can be carried out on the processing nodes belonging to the current level.
In one embodiment, when determining a processing node belonging to the first hierarchical level in the current processing flow directed graph, the following steps may be performed: and searching all processing nodes in the current processing flow directed graph, and finding out the processing node with the in-degree of 0 as the processing node of the first level.
According to the above manner, the processing nodes of each level in the processing flow directed graph can be determined level by level from the first level, and then the processing nodes of each level can be marked to obtain a preset node level mark list.
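The breadth-first level marking described above can be sketched with in-degree bookkeeping (essentially Kahn's algorithm): the nodes with in-degree 0 form the first level, and a node joins a later level once all of its predecessors have been marked. This interpretation, and all names, are assumptions rather than the original's exact procedure:

```python
from collections import defaultdict


def mark_levels(edges, nodes):
    """Assign a level mark to every node of a flow directed graph.

    edges: list of (src, dst) directed edges
    nodes: list of all node ids
    Returns a dict mapping node id -> level mark (1-based)."""
    indegree = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for src, dst in edges:
        succ[src].append(dst)
        indegree[dst] += 1

    marks = {}
    current = [n for n in nodes if indegree[n] == 0]  # first level: in-degree 0
    mark = 1
    while current:
        for n in current:
            marks[n] = mark
        nxt = []
        for n in current:
            for m in succ[n]:
                indegree[m] -= 1
                if indegree[m] == 0:  # all predecessors are now marked
                    nxt.append(m)
        current, mark = nxt, mark + 1
    return marks
```

For a diamond-shaped flow (one start node fanning out to two nodes that both feed one end node), this yields level marks 1, 2, 2, 3, matching the level-by-level invocation order described earlier.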
In an embodiment, the obtaining of the plurality of template processing flows related to the service data processing may include the following steps: acquiring a historical service data processing record; extracting a plurality of processing flows of the historical service data according to the historical service data processing records; and clustering the processing flows of the historical service data to obtain a plurality of template processing flows related to service data processing.
By the embodiment, a plurality of template processing flows which are relatively comprehensive and can cover most of service data processing possibly occurring in the targeted application scene can be obtained, so that the template processing flows can be conveniently and efficiently utilized subsequently to accurately perform corresponding processing on the accessed target service data.
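As a crude stand-in for the clustering step just described, historical processing flows can be grouped by a node-sequence signature, keeping one representative per group. The signature-based grouping is an assumption for illustration only; the original does not specify the clustering method:

```python
def template_flows(historical_flows):
    """Deduplicate historical processing flows into template flows.

    historical_flows: list of flows, each an ordered list of node names.
    Returns one representative flow per distinct node sequence."""
    templates = {}
    for flow in historical_flows:
        key = tuple(flow)  # signature: the ordered node sequence
        templates.setdefault(key, list(flow))
    return list(templates.values())
```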
In an embodiment, after obtaining the preset node hierarchy flag list, when the method is implemented, the following may be further included:
s1: determining the logical relationship among processing nodes belonging to the same level in the processing flow directed graph according to the template processing flow;
s2: determining the association degree between processing nodes belonging to the same level and target service data processing according to the template processing flow;
s3: configuring corresponding preset weights for the processing nodes belonging to the same level according to the logical relationship between the processing nodes belonging to the same level and the association degree of the processing nodes and the target service data processing;
s4: and storing the preset weight of the processing node in the template processing flow to obtain a preset weight configuration parameter.
Through the embodiment, the corresponding preset weight can be accurately configured for the different processing nodes in each level according to the mutual relation between the different processing nodes in the same level in the target processing flow and the relevance between the processing nodes and the target service data, so that the preset weight configuration parameters with better effect can be obtained.
In an embodiment, the logical relationship between the processing nodes belonging to the same hierarchy may specifically include at least one of: and, or, not, etc. Of course, the above listed logical relationships are only illustrative. In specific implementation, other logical relationships may be introduced according to specific application scenarios and processing requirements. The present specification is not limited to these.
In this embodiment, specifically, for example, if the logical relationship between two processing nodes (e.g., processing node 1 and processing node 2) in the same level is AND, it can be determined, based on this logical relationship, that the two processing nodes have the same influence on the processing result of the level. In this case, the preset weights of the two processing nodes may be configured to the same value.
For another example, if the processing node 1 of the two processing nodes in the same hierarchy is associated with the data processing of the target traffic data with a higher degree than the processing node 2, it may be determined that the processing node 1 has a greater influence on the data processing of the target traffic data than the processing node 2. At this time, the processing node 1 may be configured with a preset weight having a relatively large value, and the processing node 2 may be configured with a preset weight having a relatively small value.
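The two weight-configuration examples above can be sketched as follows. The equal-weight rule for an AND relation and the relevance-proportional rule are assumptions drawn only from those examples, and all names are illustrative:

```python
def configure_weights(nodes, relevance, logic="AND"):
    """Configure preset weights for the processing nodes of one level
    (steps S1-S4 above).

    nodes:     list of node ids in the level
    relevance: dict mapping node id -> association degree with the
               target service data processing
    logic:     logical relationship between the nodes of the level."""
    if logic == "AND":
        # Nodes with an AND relation influence the level result equally.
        return {n: 1.0 / len(nodes) for n in nodes}
    # Otherwise, weight each node in proportion to its relevance.
    total = sum(relevance[n] for n in nodes)
    return {n: relevance[n] / total for n in nodes}
```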
In an embodiment, the service data processing method provided in this specification may be applied to efficiently process the batch of accessed service data.
As can be seen from the above, in the method for processing business data provided in the embodiments of the present specification, before processing business data, a plurality of corresponding processing flow directed graphs may be constructed based on a plurality of template processing flows; and respectively carrying out preset breadth-first traversal processing and preset node level marking processing on the processing flow directed graphs to obtain a preset node level marking list in which the level marks of the processing nodes in each template processing flow are stored. When processing the service data, a target processing flow matched with the target service data can be determined; determining the level marks of all processing nodes contained in the target processing flow by inquiring a preset node level mark list; and then, according to the level marks of the processing nodes, calling the processing nodes of the same level by level to process the target service data in parallel, so that the processing complexity of the target service data can be reduced, and the processing efficiency of the target service data can be improved.
Referring to fig. 4, the present specification further provides another service data processing method, which may include the following steps:
s401: acquiring target service data to be processed;
s402: determining a target processing flow related to the target business data; wherein the target processing flow comprises a plurality of associated processing nodes;
s403: constructing a corresponding processing flow directed graph according to the target processing flow;
s404: performing preset breadth-first traversal processing and preset node level marking processing on the processing flow directed graph to obtain a level mark of a processing node in a target processing flow;
s405: and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
In this embodiment, the target process flow may be a new process flow.
Through the embodiment, the processing flow directed graph corresponding to the target data processing flow can be generated firstly; performing preset breadth-first traversal processing and preset node level marking processing on the processing flow directed graph to obtain level marks of processing nodes in the target processing flow on line; and then, according to the level marks of the processing nodes in the target processing flow, calling the processing nodes of the same level by level to process the target service data in parallel, so as to reduce the processing complexity of the target service data processing and improve the processing efficiency of the target service data.
Embodiments of the present specification further provide a server, including a processor and a memory for storing processor-executable instructions, where the processor, when implemented, may perform the following steps according to the instructions: acquiring target service data to be processed; determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes; inquiring a preset node level mark list, and determining the level mark of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance; and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
In order to more accurately complete the above instructions, referring to fig. 5, another specific server is provided in the embodiments of the present specification, wherein the server includes a network communication port 501, a processor 502 and a memory 503, and the above structures are connected by an internal cable, so that the structures can perform specific data interaction.
The network communication port 501 may be specifically configured to acquire target service data to be processed.
The processor 502 may be specifically configured to determine a template processing procedure matched with the target service data, as a target processing procedure; wherein the target processing flow comprises a plurality of associated processing nodes; inquiring a preset node level mark list, and determining the level mark of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance; and calling a plurality of processing nodes of the same level by level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
The memory 503 may be specifically configured to store a corresponding instruction program.
In this embodiment, the network communication port 501 may be a virtual port that is bound to different communication protocols, so that different data can be sent or received. For example, the network communication port may be a port responsible for web data communication, a port responsible for FTP data communication, or a port responsible for mail data communication. In addition, the network communication port can also be a communication interface or a communication chip of an entity. For example, it may be a wireless mobile network communication chip, such as GSM, CDMA, etc.; it can also be a Wifi chip; it may also be a bluetooth chip.
In this embodiment, the processor 502 may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The description is not intended to be limiting.
In this embodiment, the memory 503 may take multiple forms. In a digital system, any device capable of storing binary data may serve as memory; in an integrated circuit, a circuit with a storage function but no physical form, such as a RAM or a FIFO, is also called a memory; in a system, a storage device in physical form, such as a memory bank or a TF card, is also called a memory.
An embodiment of the present specification further provides a computer storage medium based on the foregoing service data processing method, where the computer storage medium stores computer program instructions that, when executed, implement: acquiring target service data to be processed; determining a template processing flow matched with the target service data as a target processing flow, wherein the target processing flow comprises a plurality of associated processing nodes; querying a preset node level mark list and determining the level marks of the processing nodes in the target processing flow, wherein the preset node level mark list is obtained in advance by performing preset breadth-first traversal processing and preset node level marking processing on a plurality of processing flow directed graphs established based on the template processing flows; and invoking, level by level, the plurality of processing nodes of the same level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
In this embodiment, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache (Cache), a Hard Disk Drive (HDD), or a Memory Card (Memory Card). The memory may be used to store computer program instructions. The network communication unit may be an interface for performing network connection communication, which is set in accordance with a standard prescribed by a communication protocol.
In this embodiment, the functions and effects specifically realized by the program instructions stored in the computer storage medium can be explained by comparing with other embodiments, and are not described herein again.
Referring to fig. 6, in a software level, an embodiment of the present specification further provides a device for processing service data, where the device may specifically include the following structural modules.
The obtaining module 601 may be specifically configured to obtain target service data to be processed;
the determining module 602 may be specifically configured to determine a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes;
the query module 603 may be specifically configured to query a preset node hierarchy marker list, and determine a hierarchy marker of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance;
the processing module 604 may be specifically configured to invoke, level by level, multiple processing nodes in the same level to process the target service data in parallel according to the level flag of the processing node in the target processing flow.
It should be noted that, the units, devices, modules, etc. illustrated in the above embodiments may be implemented by a computer chip or an entity, or implemented by a product with certain functions. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. It is to be understood that, in implementing the present specification, functions of each module may be implemented in one or more pieces of software and/or hardware, or a module that implements the same function may be implemented by a combination of a plurality of sub-modules or sub-units, or the like. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The present specification further provides another service data processing apparatus, which may specifically include: the acquisition module is used for acquiring target service data to be processed; the determining module is used for determining a target processing flow related to the target service data; wherein the target processing flow comprises a plurality of associated processing nodes; the construction module is used for constructing a corresponding processing flow directed graph according to the target processing flow; the first processing module is used for performing preset breadth-first traversal processing and preset node hierarchy marking processing on the processing flow directed graph to obtain a hierarchy mark of a processing node in a target processing flow; and the second processing module is used for calling a plurality of processing nodes of the same level by level to process the target service data in parallel according to the level marks of the processing nodes in the target processing flow.
Therefore, the service data processing device provided by the embodiments of the present specification can effectively reduce the processing complexity of the target service data and improve the processing efficiency of the target service data.
In a specific scenario example, the service data processing method provided in this specification may be applied to perform a specific credit approval task. The specific implementation process can be executed by referring to the following contents.
In this scenario example, each node in the credit approval task flow may be marked (for example, with a level mark) before the actual approval; during the approval itself, the nodes are executed in parallel by mark group (for example, by level). In this way the approval flow is parallelized to the maximum extent while the execution order remains controllable, reducing the total execution time of the whole flow and improving customer experience; meanwhile, the scheduler stays simple and is easy to write and maintain.
The specific design of the implementation may include the following steps.
Step 1: a directed graph G (e.g., a process flow directed graph) is created that contains all of the approval task nodes. Where each node (e.g., processing node) is a vertex in the graph G.
In this embodiment, a vertex with an in-degree of 0 in G can be found and used as the start node, denoted Ns.
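As a loose illustration of step 1, a start node can be located by computing in-degrees over the edge list. The helper below, its name, and the edge-list representation are illustrative assumptions for this sketch, not part of the patent text.

```python
from collections import defaultdict

def find_start_nodes(edges):
    """Return all vertices with in-degree 0 in a directed graph.

    `edges` is a list of (u, v) pairs meaning u -> v.
    """
    in_degree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        in_degree[v] += 1  # every incoming edge raises v's in-degree
    return [n for n in nodes if in_degree[n] == 0]

# A small graph whose only source vertex is "Ns":
edges = [("Ns", "A"), ("Ns", "B"), ("A", "C"), ("B", "C")]
print(find_start_nodes(edges))  # ['Ns']
```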
Step 2: starting from the start node Ns, the graph G is traversed breadth-first and each node is executed. Specifically, the following substeps may be included.
Step 2-1: a queue Q is created, initially empty.
Step 2-2: the start node Ns is added to the tail of Q, and Ns is marked as 0.
Step 2-3: the current flag value c is defined to be 0.
Step 2-4: nodes whose mark equals the current mark value c are fetched from the head of Q and added to the node set L, until the first node whose mark differs from c is encountered, or Q has no more nodes.
Step 2-5: all nodes in L are executed in parallel using multiple threads.
Step 2-6: all outgoing edges (e.g., directed edges) of all nodes in L are looked up in the graph G; if any exist, each node Nt pointed to by such an outgoing edge is marked with the mark value of its predecessor node plus 1.
Step 2-7: these newly marked nodes Nt are added to the tail of queue Q.
Step 2-8: the current mark value c is incremented by 1.
Step 2-9: check whether Q is empty. If it is empty, the approval task has been fully executed. Otherwise, go back to step 2-4.
Step 2-10: if an exception occurs in any of the above steps, processing is stopped.
In the present scenario example, when marking, the start node may be marked as 0; during traversal, each node may be marked with its predecessor node's mark value plus 1.
In this scenario example, when executed in parallel, the method may include: storing a queue of nodes to be executed; acquiring nodes from the queue according to the marks; the retrieved nodes are executed simultaneously using multiple threads.
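Substeps 2-1 through 2-9 can be sketched roughly as follows. The function names, the adjacency-list representation, and the use of `ThreadPoolExecutor` are illustrative assumptions rather than the patent's actual implementation. Note that this first-visit BFS marking mirrors the procedure above; in a graph where a node's predecessors sit at different levels, a longest-path level would be needed to guarantee that every predecessor finishes first.

```python
from collections import defaultdict, deque
from concurrent.futures import ThreadPoolExecutor

def mark_levels(adjacency, start):
    """Breadth-first traversal that marks each node with a level:
    the start node gets 0, and each newly seen successor gets its
    predecessor's mark plus 1 (substeps 2-1 to 2-9)."""
    marks = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for succ in adjacency.get(node, ()):
            if succ not in marks:          # first visit fixes the mark
                marks[succ] = marks[node] + 1
                queue.append(succ)
    return marks

def run_by_level(adjacency, start, execute):
    """Group nodes by mark and run each group in parallel on threads."""
    marks = mark_levels(adjacency, start)
    levels = defaultdict(list)
    for node, level in marks.items():
        levels[level].append(node)
    for level in sorted(levels):            # level by level...
        with ThreadPoolExecutor() as pool:  # ...nodes of one level in parallel
            list(pool.map(execute, levels[level]))

adj = {"Ns": ["A", "B"], "A": ["C"], "B": ["C"]}
print(mark_levels(adj, "Ns"))  # {'Ns': 0, 'A': 1, 'B': 1, 'C': 2}
```

With this grouping, the scheduler never inspects individual edges at run time: it only drains one mark group after another, which is what keeps the scheduling program simple.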
When a credit approval task is specifically performed, for example, a financial company plans to launch a cash installment service and requires the processing time to be as short as possible to ensure customer experience. The task can be implemented in the following specific steps:
step 3-1: an overall approval flowchart (e.g., a directed graph of process flow) may be created as shown in fig. 7.
In the present scenario example, each circle in the diagram represents a process node in the approval business process. Wherein Ns is the start node.
Step 3-2: mark the start node Ns as 0 (e.g., a level mark), and fetch all nodes marked 0 to execute on multiple threads Ts, as shown in fig. 8.
Step 3-3: the successor nodes of Ns are marked 1; all nodes marked 1 are then acquired and executed on Ts.
Step 3-4: repeat the above steps until all nodes have been executed, as shown in fig. 9.
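Since fig. 7 is not reproduced here, the short sketch below uses a hypothetical stand-in for the approval graph (node names such as `identity_check` are invented for illustration) to show the grouping that steps 3-1 to 3-4 produce.

```python
from collections import deque

# Hypothetical stand-in for the approval flow of fig. 7: Ns fans out to
# three checks, which all feed one final decision node.
graph = {
    "Ns": ["identity_check", "credit_score", "blacklist_check"],
    "identity_check": ["decision"],
    "credit_score": ["decision"],
    "blacklist_check": ["decision"],
    "decision": [],
}

marks, queue = {"Ns": 0}, deque(["Ns"])
while queue:                      # breadth-first level marking
    node = queue.popleft()
    for succ in graph[node]:
        if succ not in marks:
            marks[succ] = marks[node] + 1
            queue.append(succ)

groups = {}                       # level -> nodes executed together on Ts
for node, level in marks.items():
    groups.setdefault(level, []).append(node)
print(groups)
# {0: ['Ns'], 1: ['identity_check', 'credit_score', 'blacklist_check'], 2: ['decision']}
```

The three checks at level 1 carry no mutual dependencies, so they can run simultaneously; the decision node only runs once the whole level has finished.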
This scenario example verifies the service data processing method provided in this specification. By introducing marking into the breadth-first traversal of the credit approval task nodes, all nodes fetched from the traversal queue in each round can be executed in parallel, which simplifies the scheduler. Parallel execution effectively reduces the total execution time of the whole flow and improves efficiency; and because a batch of parallel-executable nodes can be obtained directly from the traversal queue by their marks, the scheduler does not need to analyze the dependency relationships among the nodes, which simplifies the execution process and reduces processing complexity.
Although the present specification provides method steps as described in the examples or flowcharts, additional or fewer steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an apparatus or client product in practice executes, it may execute sequentially or in parallel (e.g., in a parallel processor or multithreaded processing environment, or even in a distributed data processing environment) according to the embodiments or methods shown in the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. The terms first, second, etc. are used to denote names, but not any particular order.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present specification can be implemented by software plus necessary general hardware platform. With this understanding, the technical solutions in the present specification may be essentially embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a mobile terminal, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments in the present specification.
The embodiments in the present specification are described in a progressive manner, and the same or similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. The description is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
While the specification has been described with examples, those skilled in the art will appreciate that there are numerous variations and permutations of the specification that do not depart from the spirit of the specification, and it is intended that the appended claims include such variations and modifications that do not depart from the spirit of the specification.

Claims (15)

1. A method for processing service data is characterized by comprising the following steps:
acquiring target service data to be processed;
determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes;
inquiring a preset node level mark list, and determining the level mark of a processing node in a target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance;
and invoking, level by level, a plurality of processing nodes of the same level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
2. The method according to claim 1, wherein invoking, level by level, a plurality of processing nodes of the same level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel comprises:
calling a plurality of processing nodes of the current level to process the target service data in parallel according to the following modes:
determining a level mark of the current level, and determining, from the target processing flow, the processing nodes whose level mark equals the level mark of the current level as the processing nodes of the current level;
counting the number of the processing nodes of the current level;
obtaining a plurality of copies of the target service data through data backup according to the number of processing nodes of the current level; wherein the number of copies of the target service data is equal to the number of processing nodes of the current level;
respectively sending the target service data to a plurality of processing nodes of the current level;
and invoking the plurality of processing nodes of the current level to process the received target service data in parallel.
3. The method of claim 2, wherein after invoking the plurality of processing nodes of the current level to process the received target service data in parallel, the method further comprises:
receiving node processing results fed back by the processing nodes of the current level after processing the received target service data;
querying preset weight configuration parameters, and determining the preset weights corresponding to the processing nodes of the current level;
and calculating the processing result of the current level for the target data according to the node processing results fed back by the processing nodes of the current level and the corresponding preset weights.
4. The method of claim 3, wherein after obtaining the processing result of the current level for the target data, the method further comprises:
comparing the processing result of the current level with a preset reference result threshold value to obtain a corresponding comparison result;
determining whether to call a processing node of a next level of the current level to process the target service data in parallel according to the comparison result;
and under the condition that the processing node of the next level of the current level is determined not to be called to process the target service data in parallel according to the comparison result, finishing the data processing of the target service data.
5. The method of claim 1, wherein the predetermined node-level tag list is established by:
acquiring a plurality of template processing flows related to business data processing;
constructing a plurality of processing flow directed graphs according to the plurality of template processing flows; wherein each processing flow directed graph comprises a plurality of associated processing nodes, and processing nodes having a predecessor-successor relation are connected by directed edges;
according to a preset processing rule, respectively performing preset breadth-first traversal on the plurality of processing flow directed graphs so as to determine processing nodes belonging to the same level in each processing flow directed graph;
according to a preset marking rule, corresponding level marks are set for processing nodes belonging to the same level in each processing flow directed graph;
and storing the hierarchical marks of the processing nodes in each template processing flow to obtain the preset node hierarchical mark list.
6. The method according to claim 5, wherein performing a preset breadth-first traversal on the plurality of process flow directed graphs respectively according to a preset processing rule to determine processing nodes belonging to the same level in each process flow directed graph includes:
determining the processing nodes belonging to the current level in the current processing flow directed graph according to the following modes:
determining a processing node of a previous level from the current processing flow directed graph according to a level mark of the previous level of the current level;
detecting whether any directed edge starts from a processing node of the previous level;
and under the condition that the existence of the directed edge starting from the processing node of the previous hierarchy is detected, determining the processing node pointed by the directed edge as the processing node of the current hierarchy.
7. The method of claim 5, wherein obtaining a plurality of template process flows associated with the business data process comprises:
acquiring a historical service data processing record;
extracting a plurality of processing flows of the historical service data according to the historical service data processing records;
and clustering the processing flows of the historical service data to obtain a plurality of template processing flows related to service data processing.
8. The method of claim 5, wherein after obtaining the preset list of node level markers, the method further comprises:
determining the logical relationship among processing nodes belonging to the same level in the processing flow directed graph according to the template processing flow;
determining the association degree between processing nodes belonging to the same level and target service data processing according to the template processing flow;
configuring corresponding preset weights for the processing nodes belonging to the same level according to the logical relationship between the processing nodes belonging to the same level and the association degree of the processing nodes and the target service data processing;
and storing the preset weight of the processing node in the template processing flow to obtain a preset weight configuration parameter.
9. The method of claim 1, wherein the target service data comprises a credit assessment request to be audited.
10. The method according to claim 9, wherein the target service data further carries identity information of a target user and service information of a credit service for which the credit evaluation request to be audited is directed;
correspondingly, the step of determining the template processing flow matched with the target service data comprises the following steps:
inquiring a user database according to the identity information of the target user to determine a user tag of the target user;
determining, according to the service information of the credit service, the service type of the credit service applied for by the target user, the data value of the service data, and the repayment mode of the service data;
and determining a matched target processing flow from the plurality of template processing flows according to the user tag of the target user, the service type of the credit service applied for by the target user, the data value of the service data, and the repayment mode of the service data.
11. The method of claim 10, wherein after invoking, level by level, a plurality of processing nodes of the same level to process the target service data in parallel, the method further comprises:
acquiring a processing result of each level in a target processing flow;
determining a credit evaluation result of the target user according to the processing result of each level;
and determining, according to the credit evaluation result of the target user, whether to issue the service data of the credit service applied for by the target user.
12. A device for processing service data, comprising:
the acquisition module is used for acquiring target service data to be processed;
the determining module is used for determining a template processing flow matched with the target service data as a target processing flow; wherein the target processing flow comprises a plurality of associated processing nodes;
the query module is used for querying a preset node hierarchy mark list and determining the hierarchy mark of the processing node in the target processing flow; the preset node level mark list is obtained by performing preset breadth-first traversal processing and preset node level mark processing on a plurality of processing flow directed graphs established based on template processing flows in advance;
and the processing module is used for invoking, level by level, a plurality of processing nodes of the same level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
13. A method for processing service data is characterized by comprising the following steps:
acquiring target service data to be processed;
determining a target processing flow related to the target service data; wherein the target processing flow comprises a plurality of associated processing nodes;
constructing a corresponding processing flow directed graph according to the target processing flow;
performing preset breadth-first traversal processing and preset node level marking processing on the processing flow directed graph to obtain a level mark of a processing node in a target processing flow;
and invoking, level by level, a plurality of processing nodes of the same level according to the level marks of the processing nodes in the target processing flow to process the target service data in parallel.
14. A server comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 11.
15. A computer-readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1 to 11.
CN202110074027.5A 2021-01-20 2021-01-20 Service data processing method and device and server Pending CN112766907A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110074027.5A CN112766907A (en) 2021-01-20 2021-01-20 Service data processing method and device and server

Publications (1)

Publication Number Publication Date
CN112766907A true CN112766907A (en) 2021-05-07

Family

ID=75703403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110074027.5A Pending CN112766907A (en) 2021-01-20 2021-01-20 Service data processing method and device and server

Country Status (1)

Country Link
CN (1) CN112766907A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326117A (en) * 2021-07-15 2021-08-31 中国电子科技集团公司第十五研究所 Task scheduling method, device and equipment
CN113361733A (en) * 2021-06-03 2021-09-07 建信金融科技有限责任公司 Processing method and device for reserved service
CN113506035A (en) * 2021-07-28 2021-10-15 中国工商银行股份有限公司 Method, device and equipment for determining approval process
CN113986575A (en) * 2021-10-25 2022-01-28 聚好看科技股份有限公司 Server and processing method of multi-level data
CN114090018A (en) * 2022-01-25 2022-02-25 树根互联股份有限公司 Index calculation method and device of industrial internet equipment and electronic equipment
CN114553727A (en) * 2022-02-18 2022-05-27 网宿科技股份有限公司 Data processing method and device based on content distribution network
CN115114028A (en) * 2022-07-05 2022-09-27 南方电网科学研究院有限责任公司 Task allocation method and device for electric power simulation secondary control
CN115277851A (en) * 2022-06-22 2022-11-01 聚好看科技股份有限公司 Service request processing method and system
CN115293655A (en) * 2022-09-30 2022-11-04 神州数码融信云技术服务有限公司 Flow control method and device, computer equipment and computer readable storage medium
CN115658325A (en) * 2022-11-18 2023-01-31 北京市大数据中心 Data processing method, data processing device, multi-core processor, electronic device, and medium
WO2023202005A1 (en) * 2022-04-19 2023-10-26 Zhejiang Dahua Technology Co., Ltd. Methods and systems for performing data processing tasks

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308602A (en) * 2018-08-15 2019-02-05 平安科技(深圳)有限公司 Operation flow data processing method, device, computer equipment and storage medium
CN111815169A (en) * 2020-07-09 2020-10-23 中国工商银行股份有限公司 Business approval parameter configuration method and device
CN111857984A (en) * 2020-06-01 2020-10-30 北京文思海辉金信软件有限公司 Job calling processing method and device in bank system and computer equipment
CN111914010A (en) * 2020-08-04 2020-11-10 北京百度网讯科技有限公司 Service processing method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination