CN116521643A - Data processing method and device for supporting multiple execution engines based on twin platform - Google Patents
Data processing method and device for supporting multiple execution engines based on a twin platform
- Publication number
- CN116521643A CN116521643A CN202310112605.9A CN202310112605A CN116521643A CN 116521643 A CN116521643 A CN 116521643A CN 202310112605 A CN202310112605 A CN 202310112605A CN 116521643 A CN116521643 A CN 116521643A
- Authority
- CN
- China
- Prior art keywords
- model
- data
- node
- layer
- execution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 20
- 238000012545 processing Methods 0.000 claims abstract description 7
- 238000004590 computer program Methods 0.000 claims description 10
- 238000010276 construction Methods 0.000 claims description 9
- 230000014509 gene expression Effects 0.000 claims description 7
- 238000003860 storage Methods 0.000 claims description 6
- 238000013500 data storage Methods 0.000 claims description 5
- 230000000007 visual effect Effects 0.000 claims description 4
- 238000010586 diagram Methods 0.000 claims description 3
- 238000000034 method Methods 0.000 abstract description 23
- 238000001514 detection method Methods 0.000 description 19
- 230000008569 process Effects 0.000 description 10
- 238000004364 calculation method Methods 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 241001481833 Coryphaena hippurus Species 0.000 description 4
- 230000008901 benefit Effects 0.000 description 4
- 230000008859 change Effects 0.000 description 4
- 238000001914 filtration Methods 0.000 description 4
- 238000005457 optimization Methods 0.000 description 4
- 238000013461 design Methods 0.000 description 3
- 238000004519 manufacturing process Methods 0.000 description 3
- 230000001960 triggered effect Effects 0.000 description 3
- 238000013475 authorization Methods 0.000 description 2
- 238000012417 linear regression Methods 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 241001125840 Coryphaenidae Species 0.000 description 1
- 241000288113 Gallirallus australis Species 0.000 description 1
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 238000013523 data management Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 201000010099 disease Diseases 0.000 description 1
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/211—Schema design and management
- G06F16/212—Schema design and management with details for data modelling support
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
- G06F16/288—Entity relationship models
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application discloses a data processing method and device for supporting multiple execution engines based on a twin platform, comprising the following steps: pulling data element results into a message queue of the twin platform, according to the token of each data element, through a data element interface of the data element trading platform; and calling a complex event processing module to trigger an associated business model based on the data element results in the message queue, executing the corresponding operations based on the business model, and writing the execution results into a data asset library. The method and device overcome the security and compliance problems of the raw data and support the scheduling and task execution of multiple execution engines on the twin platform.
Description
Technical Field
The present disclosure relates to the field of data element technologies, and in particular, to a data processing method and apparatus for supporting multiple execution engines based on a twin platform.
Background
From the perspective of data security and compliance, conventional systems interface directly with the raw data, so data security and the compliance of the data sources are severely challenged: it cannot be ensured that the raw data is not leaked through the system, nor that the data sources are compliant.
From the perspective of business model construction and execution, conventional systems are highly coupled from the business model presentation layer down to the model execution layer, and a change in the model execution layer leads to top-to-bottom changes across the whole system. Current big-data DAG execution frameworks have a start-up cost, and this cost is incurred no matter how small the model to be computed is. To support both the fast execution of simple business models over small data volumes and the complex computation of complex business models over large-scale data sets, there is an increasingly strong demand to loosely couple the model presentation layer from the execution layer while supporting multiple execution engines.
Disclosure of Invention
The embodiments of the application provide a data processing method and device for supporting multiple execution engines based on a twin platform, which solve the security and compliance problems of raw data and support the scheduling and task execution of multiple execution engines on the twin platform.
The embodiments of the application provide a data processing method for supporting multiple execution engines based on a twin platform, comprising the following steps:
pulling data element results into a message queue of the twin platform, according to the token of each data element, through a data element interface of the data element trading platform;
and calling a complex event processing module to trigger an associated business model based on the data element results in the message queue, executing the corresponding operations based on the business model, and writing the execution results into a data asset library.
Optionally, the construction and execution flow of the business model is implemented through a model presentation layer, a model abstraction layer and a model execution layer, wherein the model presentation layer is configured to:
acquire the associated subjects involved in the target business corresponding to the data element results, and determine the transactions handled by the associated subjects;
provide a canvas on which the attribute information of the business model required by the target business is configured visually, and establish connection relationships between the business model and the associated subjects to form a directed graph; and
pull target data fields in the technical metadata onto the canvas, and construct connection relationships between the target data fields and the attribute information of the corresponding business model, so as to expand the directed graph.
Optionally, the model presentation layer adopts the following data storage scheme:
acquiring the model information and layout information transmitted from the front end to the back end as a JSON structure;
storing the JSON structure into a model graph data table of the model presentation layer to record the node information of each first node in the directed graph and the relationships among the nodes, wherein the node information includes the node code, node name, node type, the business attributes contained in the node, and the related operators, conditions and expressions, and the relationships among the nodes record the predecessor node codes of each node.
Optionally, the model abstraction layer is configured to convert the directed graph of the JSON structure stored by the model presentation layer into the structure of the abstraction layer, each first node of the presentation layer corresponding to one or more second nodes of the abstraction layer; and
to convert the business model into a node list and node contexts, wherein each node context corresponds to one operator in a model presentation layer node, and each node context consists of node input parameters and output parameters, the input parameters including the related data tables and data fields, and the output parameters including the output intermediate tables and data fields.
Optionally, the model execution layer is configured to:
create corresponding task nodes according to the content of the node contexts of the model abstraction layer; and
perform task scheduling using a unified type of task package, so as to execute tasks based on the created task nodes.
Optionally, executing tasks based on the created task nodes includes executing the task nodes corresponding to the business model according to the trigger conditions of the business model, so as to obtain the execution results.
The embodiments of the application also provide a computer device, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the above data processing method for supporting multiple execution engines based on a twin platform.
The embodiments of the application also provide a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above data processing method for supporting multiple execution engines based on a twin platform.
The method and device overcome the security and compliance problems of the raw data and support the scheduling and task execution of multiple execution engines on the twin platform.
The foregoing is only an overview of the technical solutions of the present application. In order to make the technical means of the present application clearer, so that they can be implemented according to the contents of the specification, and to make the above and other objects, features and advantages of the present application more comprehensible, a detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is an example of a three-tier architecture for the construction and execution of a business model in an embodiment of the present application;
fig. 2 is a specific example of service model construction and execution of a three-layer architecture according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiments of the application provide a data processing method for supporting multiple execution engines based on a twin platform, which comprises the following steps:
The data processing method integrates the results of data elements and builds a subject 360-degree micro database on that basis. In this example the raw data resides in a data vault that is physically isolated by a unidirectional optical gate; the data element results are dynamically ferried to a data element library outside the optical gate, according to an authorization protocol, after being processed by the element processing center inside the data vault, so the security and compliance problems of the raw data are overcome at the mechanism level.
A data element is a data feature formed, after the data has been desensitized, by modeling as required a data set composed of several related fields, or an associated field, of the data.
The data vault refers to a data storage and management facility that is supervised by a governing department and is unified, independently controllable, safe and reliable, and that stores core data, important data, sensitive data and data elements. It is deployed primarily in governments, organizations, industries and large enterprises. The vault referred to here is located inside the area that the optical gate physically isolates.
The element processing center refers to a large-scale, full-process, automated data element processing production line that develops and manages data elements over their full life cycle, from data collection to data element processing and trading.
Ferrying means that the data element results are sent on instruction to the area outside the optical gate via a specific protocol, here UDP, which is a unidirectional protocol.
The data element results are the data sets of the defined data elements that have passed inspection and been processed by the element processing center.
After the data vault ferries the data element results to the data element library, in step S101 the data element results are pulled into the message queue of the twin platform, according to the token of each data element, through the data element interface of the data element trading platform. The data element results can be pulled over HTTP into the Kafka message queue of the twin platform according to the token of each element.
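A minimal sketch of this pull-and-enqueue step is given below, in Java. The endpoint URL, the token value, the Authorization header and the topic name "data-element-results" are illustrative assumptions, not details of the actual data element interface.

```java
// Sketch only: pull a data element result over HTTP using its token and write it
// into the twin platform's Kafka message queue. Endpoint, header and topic names
// are assumptions made for illustration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DataElementPuller {
    public static void main(String[] args) throws Exception {
        String token = "element-token-xxx";   // per-element token issued by the trading platform (assumed)

        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://trading-platform.example/api/data-element-results"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        String payload = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // enqueue the pulled result for downstream Flink/CEP processing
            producer.send(new ProducerRecord<>("data-element-results", token, payload));
        }
    }
}
```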
In step S102, the complex event processing module is invoked to trigger the associated business model based on the data element results in the message queue, the corresponding operations are executed based on the business model, and the execution results are written into the data asset library.
The Flink job of the twin platform receives the data element result data by listening to the designated Kafka topic and sinks it through two channels: one sink writes to the data resource library of the twin platform, which currently uses MySQL; the other invokes the CEP (complex event processing) module to trigger the associated business model based on the received data element results. In this example the twin platform is a capability platform on which a user can construct a data ontology model; it supports rapid model formulation and dynamic combination, as well as model expansion and customized deduction. The data resource library refers to the database holding the data objects that the model execution layer can operate on; currently the data elements are stored into this database through the Flink sink operation.
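A minimal sketch of such a dual-sink Flink job is shown below, assuming the legacy FlinkKafkaConsumer connector; the two sink classes are hypothetical placeholders for the MySQL sink and the CEP-module call, and the topic and connection details are illustrative.

```java
// Sketch only: a Flink job that listens to the designated Kafka topic and sinks
// each data element result through two channels, as described above.
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class TwinPlatformJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "twin-platform");

        DataStream<String> elements = env.addSource(
                new FlinkKafkaConsumer<>("data-element-results", new SimpleStringSchema(), props));

        // Channel 1: sink into the data resource library of the twin platform (MySQL).
        elements.addSink(new MySqlElementSink());
        // Channel 2: hand each result to the CEP module, which triggers the associated business model.
        elements.addSink(new CepModuleSink());

        env.execute("twin-platform-data-element-job");
    }

    /** Placeholder sink: would write each result into MySQL via JDBC. */
    static class MySqlElementSink implements SinkFunction<String> {
        @Override
        public void invoke(String value, Context context) {
            // INSERT INTO data_element_result ... (omitted in this sketch)
        }
    }

    /** Placeholder sink: would forward each result to the complex event processing module. */
    static class CepModuleSink implements SinkFunction<String> {
        @Override
        public void invoke(String value, Context context) {
            // the rule engine checks e.g. xxjc == 'positive' and triggers the propagation chain model
        }
    }
}
```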
An example of the CEP module triggering the business model is as follows:
The trigger condition of the event is defined as follows: when the rule expression engine in the CEP module detects that the value of the epidemic-related xx detection field (xxjc) in the personnel element result (person) data is positive, execution of the corresponding propagation chain model is triggered.
when $person.xxjc == 'positive'
then
    // code that triggers execution of the corresponding propagation chain model
end
The designated business model is triggered to execute, and the execution results are written into the data asset library, that is, into the subject 360-degree micro database being built. The construction and execution of the business model, and how business personnel twin the business of the physical world through the system, are the focus of this application; how the execution results are used to build the subject 360-degree micro database is handled in an existing way, so it is not described in detail here and is only a precondition. The subject 360-degree micro database and the corresponding database table fields form the technical metadata, that is, the technical metadata map.
The method and device overcome the security and compliance problems of the raw data and support the scheduling and task execution of multiple execution engines on the twin platform.
In some embodiments, the construction and execution flow of the business model is supported by a three-layer architecture consisting of a model presentation layer, a model abstraction layer and a model execution layer, so as to achieve loose coupling between the layers, as shown in fig. 1 and fig. 2, wherein the model presentation layer is configured to:
and acquiring the association subject related to the target service corresponding to the data element result, and determining the transacting transaction of the association subject.
Providing canvas, configuring attribute information of a service model required by a target service in a visual mode, and establishing a connection relationship between the service model and an associated main body to form a directed map;
the model presentation layer in this example supports the graphical interface to construct a business process by dragging, and for each node in the business process model, the business person designates the ontology involved in the node and the operation to be performed on the ontology.
In this example, a scenario of a network service Q is taken as an example. The service Q involves three bureaus: an A office, a B office and a C office. The offices may handle different transactions: the A office handles transaction a and transaction b, and the B office handles transaction b and transaction c, where transaction b needs to be jointly initiated by the A office and the B office. Each transaction is associated with licenses and forms; for example, transaction b requires two licenses, license E and license F, and the associated forms are a related security personnel registry and a related security device registry.
This scheme provides a canvas so that business personnel can configure business models, i.e. business metadata, visually, where each business model is associated with technical metadata, i.e. the specific database table fields corresponding to its licenses and business forms.
Taking the configuration process of the business model corresponding to service Q as an example, the business metadata is constructed as follows: three subject offices are drawn from the ontology classification and named A office, B office and C office respectively; then, based on the provided canvas, the transactions named transaction a, transaction b and transaction c are drawn out, and the licenses and the corresponding forms are drawn out in the same way. Corresponding attributes are configured for the licenses and the forms; for example, for license F, the related security personnel registry lists the information of the related personnel. The relationship between each office and a transaction is represented by selecting the corresponding office and transaction and pulling a directed line from the office to the transaction. In this way the business logic is entered into the system through the business model, which is the construction process of the business metadata map.
The canvas in this example can be implemented using standard Vue 2 functionality, with operators encapsulated on top of it, including data filtering, data collision and mathematical calculations on certain fields. The operator referred to in this example is a filtering operation on the subject; that is, the filter operator in this example selects the transaction subjects of the region where the applicant is located, so that the respective subjects of the corresponding region can be found through this operator.
Target data fields in the technical metadata can also be pulled onto the canvas, and connection relationships between the target data fields and the attribute information of the corresponding business model are constructed, so as to expand the directed graph.
Specifically, business personnel find the corresponding data fields in the technical metadata and drag them onto the canvas, and then associate the attributes in the business model with the data fields in the technical metadata by drawing connecting lines, thereby linking the business metadata with the technical metadata, that is, pulling the business metadata map and the technical metadata map through to each other.
The front-end interface records the detailed information of the business model on the canvas and the association relationships between the business model and the technical metadata, including: the entities involved in the business model, the layout of the entities on the canvas, the attribute relationships of the entities, and the relationships between the attributes and the technical metadata, for example the A office, B office and C office involved in the transaction, license F and certificate S among the licenses, the attributes involved in each form, the relationships between entities such as that between the A office and the related security personnel registry, and the association between certificate S and the corresponding table in the technical metadata. In some embodiments, the model presentation layer adopts the following data storage scheme (a sketch of the stored node record is given after these steps):
acquiring the model information and layout information transmitted from the front end to the back end as a JSON structure;
storing the JSON structure into a model graph data table of the model presentation layer to record the node information of each first node in the directed graph and the relationships among the nodes, wherein the node information includes the node code, node name, node type, the business attributes contained in the node, and the related operators, conditions and expressions, and the relationships among the nodes record the predecessor node codes of each node.
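As a sketch of the node record stored in this model graph data table, a plain Java representation might look as follows; the class and field names are illustrative assumptions, not the actual table schema.

```java
// Sketch only: one record of the hypothetical model graph data table described above.
import java.util.List;

public class ModelGraphNode {
    String nodeCode;                  // unique node code
    String nodeName;                  // display name on the canvas
    String nodeType;                  // node type
    List<String> businessAttributes;  // business attributes contained in the node
    String operator;                  // related operator, e.g. filter or join
    String condition;                 // condition configured on the node
    String expression;                // calculation expression, if any
    List<String> predecessorCodes;    // codes of the preceding nodes (records the edges)
}
```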
In some embodiments, the model abstraction layer is configured to convert the directed graph of the JSON structure stored by the model presentation layer into the structure of the abstraction layer, each first node of the presentation layer corresponding to one or more second nodes of the abstraction layer, the number of abstraction layer nodes being determined by the number of operators in the presentation layer.
The business model is converted into a node list and node contexts, wherein each node context corresponds to one operator in a model presentation layer node, and each node context consists of node input parameters and output parameters, the input parameters including the related data tables and data fields, and the output parameters including the output intermediate tables and data fields.
Specifically, the node contexts of the model abstraction layer decouple the presentation layer from the execution layer, so that the presentation layer does not depend on the technical architecture or the DAG execution framework used by the execution layer; whether or not the execution layer relies on Spark running on DolphinScheduler has no influence on the presentation layer, and the operators encapsulated by the presentation layer do not depend on the implementation of the execution layer. The purpose of the presentation layer design is to make it easy for business personnel to understand and operate. The model abstraction layer also normalizes the actions of the nodes, that is, it converts them into which operator is applied to which two data sets. The abstracted operators include filter, map, union, join, distinct and aggregate.
A conventional system usually binds a Jar package or Python code segment of specific code to a task node for a custom node, but this does not work for a system in which business models are built by business personnel and must support dynamic execution: it is not acceptable that every time a business person creates a business model, a developer has to develop and upload the corresponding task node code to obtain the execution layer, as this would not meet the requirement of rapid business response in a big data context. In some embodiments, the model execution layer is configured to:
create corresponding task nodes according to the content of the node contexts of the model abstraction layer; and
perform task scheduling using a unified type of task package, so as to execute tasks based on the created task nodes.
Because of the model abstraction layer, the application architecture supports multiple model execution layers. This example takes DolphinScheduler as the scheduler and model execution as the illustration. The model execution layer in this example uses Spark tasks running on DolphinScheduler, and an ordinary Spark task usually requires a designated Jar package to be uploaded every time a DAG execution graph is created. The generic Spark task Jar package defines how the individual operators from the abstraction layer, namely filter, map, union, join, distinct and aggregate, execute in the DolphinScheduler environment. For example, a filter of the abstraction layer is parsed into which fields are selected from a Spark Dataset and what the filtering conditions are; a map is parsed into one or more specified mathematical calculation operations; a join is parsed into a join operation between two Datasets based on the join conditions. DolphinScheduler acts as the task scheduler of the execution layer, and the actual execution is per task node.
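A minimal sketch of how such a generic Spark task could interpret the abstraction-layer operators at run time is given below; the method signature, the hardcoded sample data and the switch cases are assumptions made for illustration, not the actual generic Jar package.

```java
// Sketch only: mapping abstraction-layer operators onto Spark Dataset operations.
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.expr;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class GenericOperatorTask {

    /** Applies one abstraction-layer operator to the input dataset(s). */
    static Dataset<Row> apply(String operator, Dataset<Row> left, Dataset<Row> right,
                              String field, String value) {
        switch (operator) {
            case "filter":
                return left.filter(col(field).equalTo(value));       // select rows matching the condition
            case "map":
                return left.withColumn("derived", expr(value));      // value holds a calculation expression
            case "union":
                return left.union(right);
            case "join":
                return left.join(right, field);                      // join the two datasets on the key field
            case "distinct":
                return left.distinct();
            case "aggregate":
                return left.groupBy(col(field)).count();             // count per group as a sample aggregation
            default:
                throw new IllegalArgumentException("unsupported operator: " + operator);
        }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("generic-operator-task").master("local[*]").getOrCreate();
        // stand-in for the input table named in the node context
        Dataset<Row> person = spark.sql("SELECT '42' AS person_id, 'positive' AS xxjc");
        Dataset<Row> filtered = apply("filter", person, null, "xxjc", "positive");
        filtered.show();   // the real task would write the output intermediate table instead
        spark.stop();
    }
}
```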
Based on the foregoing, in this embodiment the generic Jar package only needs to be uploaded once; then, in the model creation stage, the interface of the model execution layer is called, the number of task nodes is parsed out of the DAG JSON structure transmitted by the abstraction layer, and the task nodes are assembled into DolphinScheduler task nodes, which also carry the dependency relationships of each task.
Generating a DAG through the DolphinScheduler interface is a standard operation and is not described in detail here. The specific structure of the node context described in this scheme is as follows (a sketch of this structure is given after the list):
the basic information comprises node names, node types and description information, wherein the node types comprise start, stage and end types.
The input parameters, including the input list and the field name, are two, the input parameters of the service node in the presentation layer can be multiple list, the operators can be multiple, the model abstract layer has been converted once, and the multiple operators are converted into single operators.
Output parameters, including tables and fields for output.
Operator, currently support filter, map, union, join, distinct and aggregate.
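A sketch of this node context structure as a plain Java class is given below; the class and field names are illustrative assumptions.

```java
// Sketch only: the node context structure passed from the abstraction layer to the execution layer.
import java.util.List;

public class NodeContext {
    // basic information
    String nodeName;
    String nodeType;          // start, stage or end
    String description;

    // input parameters: the (up to two) input tables and their fields
    List<TableRef> inputs;
    // output parameters: the intermediate table and fields produced by this node
    TableRef output;

    // single operator after abstraction: filter, map, union, join, distinct or aggregate
    String operator;

    public static class TableRef {
        String tableName;
        List<String> fields;
    }
}
```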
Most general-purpose task executors only keep the overall calculation result after the whole process has finished executing, whereas this application focuses on business twinning. That is, the solution of this application executes a specific model according to the trigger condition of the model to obtain an execution result. With this design, which is equivalent to running the whole flow of real production, i.e. business twinning as in this example, it is necessary to know not only the final execution result but also the causes behind it, so the calculation results of the individual steps are also saved as a basis.
Data twinning means that a simulation is run on the system provided by this scheme before the trigger condition is entered into the production system. This embodiment further takes an epidemiology-related detection point optimization model as an example; the model reflects the influence of increasing or decreasing the detection points in a certain area on the detection pressure of the other detection points.
The detection point optimization model adds or removes detection points based on the multiple detection points available in a certain street. The probability p that a person in the street area goes to each detection point can be calculated from sample data.
Combining the probability data with a table of each person's distance to each detection point, and letting s be the walking distance from a person to a detection point, a linear regression performed with Weka yields a specific regression expression p = f(s), which is then used to calculate the probability of every person going to every detection point.
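A minimal sketch of fitting p = f(s) with Weka's LinearRegression is shown below; the ARFF file name, the attribute layout and the sample distance value are illustrative assumptions.

```java
// Sketch only: fit the regression p = f(s) on sample data and estimate p for one person.
import weka.classifiers.functions.LinearRegression;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DetectionPointRegression {
    public static void main(String[] args) throws Exception {
        // assumed ARFF file: numeric attribute s (walking distance), class attribute p (probability)
        Instances data = new DataSource("distance_probability.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        LinearRegression regression = new LinearRegression();
        regression.buildClassifier(data);      // yields the regression expression p = f(s)
        System.out.println(regression);

        // estimate p for a person whose walking distance to a detection point is 850 m
        Instance person = new DenseInstance(data.numAttributes());
        person.setDataset(data);
        person.setValue(0, 850.0);
        double p = regression.classifyInstance(person);
        System.out.println("estimated probability = " + p);
    }
}
```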
The detection point optimization model is implemented on the background system of this application as follows:
The data tables related to the detection points are prepared. The specific steps corresponding to the detection point optimization model are: a calculation node for the probability table of persons going to each detection point, a statistics node before the change of the number of detection points, an estimation node after the change of the number of detection points, and a comparison node for the change in the number of detection points. As for how the model is mapped onto the deployed system, the model is described on the canvas by dragging, in the map-building way described above, and the data storage is determined by the operators and expressions designated on each node.
If a detection point is added or removed, the probability table is recalculated, and then the detection point optimization model is manually advanced on the interface to obtain a comparison between the new estimate and the estimate before the adjustment.
In summary, the application integrates the results of the data elements and builds the subject 360-degree micro database on that basis. The raw data in the application resides in the data vault physically isolated by the unidirectional optical gate; the data element results are dynamically ferried to the data element library outside the optical gate according to the authorization protocol after being processed by the element processing center in the data vault, so the security and compliance problems of the raw data are overcome at the mechanism level.
Conventional systems are highly coupled from the business model presentation layer to the model execution layer, and a change in the model execution layer leads to changes across the entire system. The application adopts a three-layer design of model presentation layer, model abstraction layer and model execution layer, loosely coupling the presentation layer from the execution layer while supporting multiple execution engines.
The method of the application has a unified node context structure, supports creating different execution layer task nodes from a unified Jar package according to the parameters, and, by executing according to the node contexts, meets the requirement of rapid business response in a big data context.
The embodiments of the application also provide a computer device, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the above data processing method for supporting multiple execution engines based on a twin platform.
The embodiments of the application also provide a computer-readable storage medium, on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the above data processing method for supporting multiple execution engines based on a twin platform.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal (which may be a mobile phone, a computer, a server or a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those of ordinary skill in the art without departing from the spirit of the present application and the scope of the protection of the claims, which fall within the protection of the present application.
Claims (8)
1. A data processing method for supporting multiple execution engines based on a twin platform, comprising:
pulling data element results into a message queue of the twin platform, according to the token of each data element, through a data element interface of the data element trading platform;
and calling a complex event processing module to trigger an associated business model based on the data element results in the message queue, executing the corresponding operations based on the business model, and writing the execution results into a data asset library.
2. The data processing method for supporting multiple execution engines based on a twin platform according to claim 1, wherein the construction and execution flow of the business model is implemented through a three-layer architecture of a model presentation layer, a model abstraction layer and a model execution layer, wherein the model presentation layer is configured to:
acquire the associated subjects involved in the target business corresponding to the data element results, and determine the transactions handled by the associated subjects;
provide a canvas on which the attribute information of the business model required by the target business is configured visually, and establish connection relationships between the business model and the associated subjects to form a directed graph; and
pull target data fields in the technical metadata onto the canvas, and construct connection relationships between the target data fields and the attribute information of the corresponding business model, so as to expand the directed graph.
3. The data processing method for supporting multiple execution engines based on a twin platform according to claim 2, wherein the model presentation layer adopts the following data storage scheme:
acquiring the model information and layout information transmitted from the front end to the back end as a JSON structure;
storing the JSON structure into a model graph data table of the model presentation layer to record the node information of each first node in the directed graph and the relationships among the nodes, wherein the node information includes the node code, node name, node type, the business attributes contained in the node, and the related operators, conditions and expressions, and the relationships among the nodes record the predecessor node codes of each node.
4. The data processing method for supporting multiple execution engines based on a twin platform according to claim 3, wherein the model abstraction layer is configured to convert the directed graph of the JSON structure stored by the model presentation layer into the structure of the abstraction layer, each first node of the presentation layer corresponding to one or more second nodes of the abstraction layer; and
to convert the business model into a node list and node contexts, wherein each node context corresponds to one operator in a model presentation layer node, and each node context consists of node input parameters and output parameters, the input parameters including the related data tables and data fields, and the output parameters including the output intermediate tables and data fields.
5. The data processing method for supporting multiple execution engines based on a twin platform according to claim 4, wherein the model execution layer is configured to:
create corresponding task nodes according to the content of the node contexts of the model abstraction layer; and
perform task scheduling using a unified type of task package, so as to execute tasks based on the created task nodes.
6. The data processing method for supporting multiple execution engines based on a twin platform according to claim 5, wherein executing tasks based on the created task nodes comprises executing the task nodes corresponding to the business model according to the trigger conditions of the business model, so as to obtain the execution results.
7. A computer device, comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the data processing method for supporting multiple execution engines based on a twin platform according to any one of claims 1 to 6.
8. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the data processing method for supporting multiple execution engines based on a twin platform according to any one of claims 1 to 6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2022114841990 | 2022-11-25 | ||
CN202211484199 | 2022-11-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116521643A true CN116521643A (en) | 2023-08-01 |
Family
ID=87391014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310112605.9A Pending CN116521643A (en) | 2022-11-25 | 2023-02-14 | Data processing method and device for supporting multiple execution engines based on twin platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116521643A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116821944A (en) * | 2023-08-31 | 2023-09-29 | 中电安世(成都)科技有限公司 | Data processing method and system based on data element |
-
2023
- 2023-02-14 CN CN202310112605.9A patent/CN116521643A/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116821944A (en) * | 2023-08-31 | 2023-09-29 | 中电安世(成都)科技有限公司 | Data processing method and system based on data element |
CN116821944B (en) * | 2023-08-31 | 2023-11-14 | 中电安世(成都)科技有限公司 | Data processing method and system based on data element |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9876684B2 (en) | System architecture for cloud-platform infrastructure layouts | |
US8374987B2 (en) | Stateful, continuous evaluation of rules by a state correlation engine | |
US20170329506A1 (en) | Visual workflow model | |
US20170109657A1 (en) | Machine Learning-Based Model for Identifying Executions of a Business Process | |
CN107924406A (en) | Selection is used for the inquiry performed to real-time stream | |
US11507858B2 (en) | Rapid predictive analysis of very large data sets using the distributed computational graph using configurable arrangement of processing components | |
CN108415832A (en) | Automatic interface testing method, device, equipment and storage medium | |
US9990188B2 (en) | Mechanisms for declarative expression of data types for data storage | |
US20140280142A1 (en) | Data analytics system | |
US10642863B2 (en) | Management of structured, non-structured, and semi-structured data in a multi-tenant environment | |
US20170109668A1 (en) | Model for Linking Between Nonconsecutively Performed Steps in a Business Process | |
US7921075B2 (en) | Generic sequencing service for business integration | |
US20170109667A1 (en) | Automaton-Based Identification of Executions of a Business Process | |
US20120159446A1 (en) | Verification framework for business objects | |
US8924914B2 (en) | Application creation tool toolkit | |
US11487584B2 (en) | Rule generation and tasking resources and attributes to objects system and method | |
EP2492806A1 (en) | Unified interface for meta model checking, modifying, and reporting | |
JP2012059261A (en) | Context based user interface, retrieval, and navigation | |
US9058176B2 (en) | Domain-specific generation of programming interfaces for business objects | |
US10048984B2 (en) | Event-driven multi-tenant computer-management platform | |
US20210350262A1 (en) | Automated decision platform | |
CN116521643A (en) | Data processing method and device for supporting multiple execution engines based on twin platform | |
US9460304B1 (en) | Data services generation | |
Davydova et al. | Mining hybrid UML models from event logs of SOA systems | |
US8706804B2 (en) | Modeled chaining of service calls |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |