CN116757269A - Logic orchestration prediction model training method, logic orchestration method and apparatus
- Publication number
- CN116757269A (application number CN202310715015.5A)
- Authority
- CN
- China
- Prior art keywords
- logic
- vector
- component
- node
- variable
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/34—Graphical or visual programming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/42—Syntactic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/43—Checking; Contextual analysis
- G06F8/436—Semantic checking
- G06F8/437—Type checking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application relates to a logic orchestration prediction model training method, a logic orchestration method, an apparatus, a computer device, a storage medium and a computer program product, relates to the field of computer technology, and can be applied in the financial field and other related fields. The method comprises: constructing a syntax tree of sample logic for each sample logic in a low-code platform; determining a logic sequence vector of the sample logic based on the syntax tree; performing sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic; identifying the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector; and training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model. The method can improve the efficiency of logic orchestration in a low-code platform.
Description
Technical Field
The present application relates to the field of computer technology, and in particular to a logic orchestration prediction model training method, a logic orchestration method, an apparatus, a computer device, a storage medium, and a computer program product.
Background
Low code is a visualized approach to application development. Logic orchestration is a method of developing business functions under low code: for example, a developer forms a piece of logic with a certain business function by dragging visual graphical components into order, and the corresponding business function is realized by means of that logic.
However, in low-code-based logic orchestration, the selection of visual graphical components still relies on the developer's subjective judgment, which tends to make it difficult for non-professional developers to concentrate on creative programming activities and thus lowers the efficiency of logic orchestration in the low-code platform.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a logic orchestration prediction model training method, a logic orchestration method, an apparatus, a computer device, a computer-readable storage medium, and a computer program product that can improve the efficiency of logic orchestration in a low-code platform.
In a first aspect, the present application provides a logic orchestration prediction model training method. The method comprises the following steps:
constructing a syntax tree of sample logic for each sample logic in a low-code platform; each node in the syntax tree corresponds to a variable constituting the sample logic, and a variable is an instantiated object of a component;
determining a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
performing sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
identifying the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector;
and training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model.
In one embodiment, the determining the logic sequence vector of the sample logic based on the syntax tree includes:
determining the component type feature, component identification feature and component nesting feature of each node in the syntax tree based on the syntax tree; the component nesting feature is used to characterize nesting relationships among components;
determining the node type of each node according to the component nesting feature;
and performing word vector mapping on the component type feature, component identification feature and component nesting feature of each node according to the node type, to obtain the logic sequence vector of the sample logic.
In one embodiment, the performing word vector mapping on the component type feature, the component identification feature and the component nesting feature of each node according to the node type includes:
in the case that the node is a first node without nested child nodes, performing word vector mapping on the component type feature, the component identification feature and the component nesting feature of the first node respectively, to obtain a component type feature vector, a component identification feature vector and a component nesting feature vector of the first node;
and in the case that the node is a second node with nested child nodes, performing word vector mapping on the component type feature and the component identification feature of the second node respectively to obtain a component type feature vector and a component identification feature vector of the second node, and performing word vector mapping on the component nesting feature of the second node based on the child nodes nested under the second node to obtain a component nesting feature vector of the second node.
In one embodiment, the training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information to obtain a trained logic orchestration prediction model includes:
performing feature extraction on the logic sequence sub-vector through the long short-term memory artificial neural network in the logic orchestration prediction model to be trained, to obtain a target vector corresponding to the logic sequence sub-vector;
classifying the target vector through the attention mechanism network in the logic orchestration prediction model to be trained, to obtain a prediction component corresponding to the logic sequence sub-vector;
and training the logic orchestration prediction model to be trained based on difference information between the prediction component and the target component, to obtain the trained logic orchestration prediction model.
In one embodiment, the attention mechanism network in the logic orchestration prediction model to be trained is obtained by:
obtaining a context feature vector of each variable constituting each sample logic in the low-code platform;
and obtaining the attention mechanism network in the logic orchestration prediction model to be trained based on the context feature vectors of the variables.
In one embodiment, the obtaining the context feature vector of each variable constituting each sample logic in the low-code platform includes:
for each variable, determining a current position of the variable in the syntax tree of the corresponding sample logic;
determining a variable count feature and a variable distance feature of the variable based on the current position; the variable count feature is used to characterize the number of times the variable appears before the current position in the syntax tree, and the variable distance feature is used to characterize the distance between the current position and the position in the syntax tree where the variable last appeared;
and obtaining the context feature vector of the variable based on the variable count feature and the variable distance feature of the variable.
In a second aspect, the present application also provides a logic orchestration method. The method comprises the following steps:
identifying a node to be predicted in a logic orchestration interface, and determining the target logic to which the node to be predicted belongs;
determining a logic sequence sub-vector corresponding to the node to be predicted based on the target logic;
inputting the logic sequence sub-vector corresponding to the node to be predicted into a trained logic orchestration prediction model, to obtain at least one prediction component associated with the node to be predicted and a degree of association between each prediction component and the node to be predicted; the trained logic orchestration prediction model is the trained model of the above logic orchestration prediction model training method;
and displaying the at least one prediction component in the logic orchestration interface according to the degree of association, and orchestrating the target logic based on the at least one prediction component.
In one embodiment, the determining, based on the target logic, a logic sequence sub-vector corresponding to the node to be predicted includes:
constructing a syntax tree of the target logic based on the target logic;
determining a logic sequence vector of the target logic according to the component type feature, the component identification feature and the component nesting feature of each node in the syntax tree of the target logic;
and extracting, from the logic sequence vector of the target logic, a sub-vector of a preset length whose last element is the vector element of the node immediately preceding the node to be predicted, and using this sub-vector as the logic sequence sub-vector corresponding to the node to be predicted.
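As an illustration of this extraction step, the following is a minimal Python sketch; the (m, 3, k) array layout, the function name and the front zero-padding for short preambles are assumptions for illustration, not the patented implementation.

```python
# Minimal sketch of the prediction-time sub-vector extraction described above.
import numpy as np

def subvector_for_prediction(seq: np.ndarray, predict_idx: int,
                             width: int = 3) -> np.ndarray:
    """seq: logic sequence vector of the target logic, shape (m, 3, k);
    predict_idx: position of the node to be predicted. The last element of
    the returned window is the vector of the node just before that position."""
    start = max(0, predict_idx - width)
    window = seq[start:predict_idx]
    if len(window) < width:  # pad zeros at the front when the preamble is short
        pad = np.zeros((width - len(window),) + seq.shape[1:])
        window = np.concatenate([pad, window])
    return window  # shape (width, 3, k), fed to the trained model

window = subvector_for_prediction(np.zeros((5, 3, 16)), predict_idx=4)
```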
In a third aspect, the present application further provides a logic orchestration prediction model training apparatus. The apparatus comprises:
a syntax tree construction module, configured to construct a syntax tree of sample logic for each sample logic in a low-code platform; each node in the syntax tree corresponds to a variable constituting the sample logic, and a variable is an instantiated object of a component;
a sequence determining module, configured to determine a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
a sliding window processing module, configured to perform sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
a component determining module, configured to identify the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector;
and a model training module, configured to train the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model.
In a fourth aspect, the present application further provides a logic orchestration apparatus. The apparatus comprises:
a logic determining module, configured to identify a node to be predicted in a logic orchestration interface and determine the target logic to which the node to be predicted belongs;
a vector determining module, configured to determine a logic sequence sub-vector corresponding to the node to be predicted based on the target logic;
a component prediction module, configured to input the logic sequence sub-vector corresponding to the node to be predicted into a trained logic orchestration prediction model, to obtain at least one prediction component associated with the node to be predicted and a degree of association between each prediction component and the node to be predicted; the trained logic orchestration prediction model is the trained model of the above logic orchestration prediction model training method;
and a logic orchestration module, configured to display the at least one prediction component in the logic orchestration interface according to the degree of association, and to orchestrate the target logic based on the at least one prediction component.
In a fifth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
constructing a syntax tree of sample logic for each sample logic in a low-code platform; each node in the syntax tree corresponds to a variable constituting the sample logic, and a variable is an instantiated object of a component;
determining a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
performing sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
identifying the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector;
and training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model.
In a sixth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
constructing a syntax tree of sample logic for each sample logic in a low-code platform; each node in the syntax tree corresponds to a variable constituting the sample logic, and a variable is an instantiated object of a component;
determining a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
performing sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
identifying the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector;
and training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model.
In a seventh aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
constructing a syntax tree of sample logic for each sample logic in a low-code platform; each node in the syntax tree corresponds to a variable constituting the sample logic, and a variable is an instantiated object of a component;
determining a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
performing sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
identifying the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector;
and training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model.
With the above logic orchestration prediction model training method, logic orchestration method, apparatus, computer device, storage medium and computer program product, a syntax tree of sample logic is first constructed for each sample logic in the low-code platform, where each node in the syntax tree corresponds to a variable constituting the sample logic and a variable is an instantiated object of a component; a logic sequence vector of the sample logic is then determined based on the syntax tree, where one vector element in the logic sequence vector corresponds to one variable constituting the sample logic; sliding-window processing is then performed on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic; the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector is then identified as the target component of the logic sequence sub-vector; and finally, the logic orchestration prediction model to be trained is trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model. In this way, the logic sequence vector of the sample logic can be obtained from the syntax tree constructed for the sample logic in the low-code platform, and at least one logic sequence sub-vector of the sample logic can then be obtained through sliding-window processing of the logic sequence vector, so that the logic sequence sub-vector can serve as input information and its target component as supervision information for supervised training of the logic orchestration prediction model to be trained. The result is a logic orchestration prediction model that can predict, from the preamble of the logic being orchestrated, the next component to be inserted into that logic. This training method frees developers from tedious visual graphical component selection, allowing them to concentrate better on creative programming activities and improving the efficiency of logic orchestration in the low-code platform.
Drawings
FIG. 1 is a flow diagram of a logic orchestration prediction model training method in one embodiment;
FIG. 2 is a schematic diagram of a syntax tree of sample logic in one embodiment;
FIG. 3 is a flow diagram of the steps of determining a logic sequence vector of sample logic based on a syntax tree in one embodiment;
FIG. 4 is a flow diagram of the steps of training a logic orchestration prediction model to be trained to obtain a trained logic orchestration prediction model in one embodiment;
FIG. 5 is a schematic structural diagram of a logic orchestration prediction model to be trained in one embodiment;
FIG. 6 is a flow diagram of a logic orchestration prediction model training method in another embodiment;
FIG. 7 is a flow diagram of a logic orchestration method in one embodiment;
FIG. 8 is a schematic diagram of a visual-language-based intelligent assistance method for low-code logic orchestration in one embodiment;
FIG. 9 is a block diagram of a logic orchestration prediction model training apparatus in one embodiment;
FIG. 10 is a block diagram of a logic orchestration apparatus in one embodiment;
FIG. 11 is an internal structure diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region.
It should be further noted that the logic orchestration prediction model training method, logic orchestration method, apparatus, computer device, storage medium and computer program product provided by the present application can be applied in the field of financial technology, for example to assist bank developers in developing and applying the bank's business systems; they can also be applied in other related fields, such as the field of computer technology, to provide intelligent component recommendation for non-professional developers during low-code development.
In an exemplary embodiment, as shown in FIG. 1, a logic orchestration prediction model training method is provided. This embodiment is described with the method applied to a server; it can be appreciated that the method may also be applied to a terminal, or to a system comprising a server and a terminal and be implemented through interaction between the server and the terminal. The server can be implemented as an independent server or as a server cluster composed of a plurality of servers; the terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone or a tablet computer. In this embodiment, the method includes the following steps:
Step S102, for each sample logic in the low-code platform, constructing a syntax tree of the sample logic.
In the field of code development, logic refers to a set of processes capable of realizing a corresponding function; in a low-code platform, a logic is ultimately converted into a function in the code that implements the corresponding function.
In the low-code platform, a logic is composed of a plurality of variables, each variable being an instantiated object of a component; a component is an abstraction over multiple variables with common characteristics, and the relationship between a component and a variable can be understood as the relationship between a class and an object in an object-oriented language.
It can be appreciated that in a low-code platform for visual application development, a developer can orchestrate logic by dragging visual graphical components.
Sample logic refers to the standard logic provided by the low-code platform, and can also include custom logic and historical logic created by developers.
Each node in the syntax tree corresponds to one of the variables that constitute the sample logic.
Specifically, for each sample logic in the low-code platform, the server constructs the syntax tree of the sample logic according to the component features corresponding to each variable constituting the sample logic and the nesting relationships among the components to which the variables belong.
By way of example, suppose the orchestration logic of the low-code platform provides the developer with some 30 components in 6 general categories, including logic components, arithmetic components, comparison operation components and so on, each described by a component type feature, a component identification feature and a component nesting feature. The component type feature characterizes which of the 6 general categories the component belongs to, the component identification feature identifies the component among the roughly 30 components, and the component nesting feature characterizes the nesting relationships among components. FIG. 2 is a schematic diagram of the syntax tree of a sample logic: the component corresponding to the first variable is the assignment component among the logic components, and it nests two variables; the components corresponding to the two nested variables are the variable component among the system components and the data query component among the logic components; the variable corresponding to the variable component nests no further variables, while the variable corresponding to the data query component nests further variables. It can be appreciated that FIG. 2 illustrates only a portion of the syntax tree of the sample logic.
It should be noted that, for the component nesting feature, "null" indicates that the node has no nested child nodes, and "submodule" indicates that the node has nested child nodes.
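To make the node description concrete, here is a minimal Python sketch of such a syntax tree; the class name, the nested-dict input format and build_syntax_tree are illustrative assumptions, not part of the patent text.

```python
# Minimal sketch of the syntax tree described above.
from dataclasses import dataclass, field

@dataclass
class SyntaxNode:
    component_type: str   # one of the 6 general categories, e.g. "logic component"
    component_id: str     # one of the ~30 components, e.g. "assignment component"
    children: list = field(default_factory=list)

    @property
    def nesting(self) -> str:
        # Component nesting feature: "null" for leaves, "submodule" otherwise.
        return "submodule" if self.children else "null"

def build_syntax_tree(logic: dict) -> SyntaxNode:
    """Build the tree from a nested dict {"type": ..., "id": ..., "children": [...]}."""
    return SyntaxNode(logic["type"], logic["id"],
                      [build_syntax_tree(c) for c in logic.get("children", [])])

# The fragment of FIG. 2: an assignment component nesting a variable
# component and a data query component (further nesting omitted).
tree = build_syntax_tree({
    "type": "logic component", "id": "assignment component", "children": [
        {"type": "system component", "id": "variable component"},
        {"type": "logic component", "id": "data query component", "children": []},
    ]})
assert tree.nesting == "submodule" and tree.children[0].nesting == "null"
```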
Step S104, determining a logic sequence vector of the sample logic based on the syntax tree.
Wherein one vector element in the logic sequence vector corresponds to one variable that constitutes the sample logic, i.e., one node in the syntax tree.
Specifically, the server converts the word expression of each node in the syntax tree of the sample logic into a vector expression through a word vector model to obtain the word vector of each node, and then obtains the logic sequence vector of the sample logic based on the word vectors of all nodes.
For example, referring to FIG. 2, the server converts the component type feature "logic component", the component identification feature "assignment component" and the component nesting feature "submodule" of the first node into vector form through a word2vec model (a model for converting words into vectors), obtaining the word vector X1 = {X11, X12, X13} of that node. Similarly, the server obtains the word vector X2 = {X21, X22, X23} of the second node (the node corresponding to the variable component) and the word vector X3 = {X31, X32, X33} of the third node (the node corresponding to the data query component). The server then combines X1, X2 and X3 to obtain the logic sequence vector {X1, X2, X3} of the sample logic.
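The word-vector mapping step could look roughly like the following sketch using gensim's word2vec implementation; the toy corpus, the vector size k and the choice to treat each node's three features as one training sentence are assumptions for illustration.

```python
# Hedged sketch of the word-vector mapping with word2vec (gensim).
from gensim.models import Word2Vec

corpus = [  # in practice: the features of every node of every sample logic
    ["logic component", "assignment component", "submodule"],
    ["system component", "variable component", "null"],
    ["logic component", "data query component", "submodule"],
]
k = 16  # the feature-vector length k from the text
w2v = Word2Vec(sentences=corpus, vector_size=k, window=3, min_count=1, sg=1)

def node_word_vector(comp_type: str, comp_id: str, nesting: str):
    """Word vector X = {X_type, X_id, X_nesting} of one node, dimension 3 x k."""
    return [w2v.wv[comp_type], w2v.wv[comp_id], w2v.wv[nesting]]

X1 = node_word_vector("logic component", "assignment component", "submodule")
```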
Step S106, performing sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic.
Sliding-window processing refers to intercepting the logic sequence vector with a preset sliding-window step and a preset sliding-window width.
Specifically, the server performs sliding-window processing on the logic sequence vector of the sample logic according to the preset sliding-window step and the preset sliding-window width to obtain at least one logic sequence sub-vector of the sample logic; when the length of the logic sequence vector is smaller than the preset sliding-window width, the server pads zeros at the front of the logic sequence vector to obtain a logic sequence sub-vector of the sample logic.
For example, assume the logic sequence vector of the sample logic obtained by the server is {X1, X2, X3, X4, X5, X6, X7, X8}, and the server uses 1 as the preset sliding-window step and 3 as the preset sliding-window width; the logic sequence sub-vectors obtained by the server are then {X1, X2, X3}, {X2, X3, X4}, {X3, X4, X5}, {X4, X5, X6}, {X5, X6, X7} and {X6, X7, X8}. If the logic sequence vector is {X1, X2}, the server takes {0, X1, X2} as the logic sequence sub-vector of the logic sequence vector.
Step S108, identifying the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector.
Specifically, for each logic sequence sub-vector, the server determines the component corresponding to the vector element immediately following the sub-vector in the logic sequence vector according to that element's component identification feature, and identifies this component as the target component of the logic sequence sub-vector.
For example, take the logic sequence sub-vector {X2, X3, X4}: in the logic sequence vector, the vector element following {X2, X3, X4} is X5. The server therefore determines, from the word vector corresponding to X5, the component identification feature of the variable corresponding to X5, obtains the specific component to which that variable belongs, and determines this component as the target component of {X2, X3, X4}. For instance, if the component identification feature of the variable corresponding to X5 is "logic component", the server determines the target component of {X2, X3, X4} to be the logic component.
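Steps S106 and S108 together turn one logic sequence vector into supervised training pairs; a minimal sketch follows, assuming the (m, 3, k) array layout from the earlier sketches and front zero-padding as described.

```python
# Sketch of steps S106/S108: slicing one logic sequence vector into
# (sub-vector, target component) training pairs.
import numpy as np

def windows_and_targets(seq: np.ndarray, components: list,
                        width: int = 3, step: int = 1):
    """seq: logic sequence vector, shape (m, 3, k); components[i] is the
    component identification of the i-th node."""
    if len(seq) < width:  # zero-pad at the front when the logic is short
        pad = np.zeros((width - len(seq),) + seq.shape[1:])
        seq = np.concatenate([pad, seq])
    pairs = []
    # Only windows followed by a next element can be labelled, so the last
    # window ({X6, X7, X8} in the example above) yields no training pair.
    for start in range(0, len(seq) - width, step):
        pairs.append((seq[start:start + width], components[start + width]))
    return pairs

pairs = windows_and_targets(np.zeros((8, 3, 16)), [f"c{i}" for i in range(1, 9)])
assert len(pairs) == 5 and pairs[1][1] == "c5"  # {X2, X3, X4} -> component of X5
```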
Step S110, training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, to obtain a trained logic orchestration prediction model.
The logic orchestration prediction model to be trained is a long short-term memory (LSTM) artificial neural network model with an attention mechanism network.
Specifically, the server takes the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, obtains the prediction component of the logic sequence sub-vector through the logic orchestration prediction model to be trained, and then performs supervised training on the model according to the difference information between the prediction component and the target component, obtaining a trained logic orchestration prediction model that can predict the component a developer needs to add next to the current logic.
In the logic orchestration prediction model training method provided in the above embodiment, the server first constructs a syntax tree of sample logic for each sample logic in the low-code platform; it then determines a logic sequence vector of the sample logic based on the syntax tree; it then performs sliding-window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic; it then identifies the component corresponding to the vector element immediately following the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector; and finally, it trains the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information, obtaining a trained logic orchestration prediction model. In this way, the server can obtain the logic sequence vector of the sample logic from the syntax tree constructed for the sample logic in the low-code platform, and from it at least one logic sequence sub-vector, so that the sub-vector can serve as input information and its target component as supervision information for supervised training, yielding a logic orchestration prediction model that can predict, from the preamble of the logic being orchestrated, the next component to be inserted. This training method frees developers from tedious visual graphical component selection, allowing them to concentrate better on creative programming activities and improving the efficiency of logic orchestration in the low-code platform.
As shown in FIG. 3, in an exemplary embodiment, step S104 of determining the logic sequence vector of the sample logic based on the syntax tree specifically includes the following steps:
Step S302, determining the component type feature, component identification feature and component nesting feature of each node in the syntax tree based on the syntax tree.
Step S304, determining the node type of each node according to the component nesting characteristics.
Step S306, performing word vector mapping on the component type feature, component identification feature and component nesting feature of each node according to the node type, to obtain the logic sequence vector of the sample logic.
Wherein component nesting features are used to characterize nesting relationships between individual components.
The node type distinguishes nodes according to whether they have nested child nodes.
Specifically, the server first determines the component type feature, component identification feature and component nesting feature of each node in the syntax tree; it then divides the nodes, according to their component nesting features, into first nodes without nested child nodes and second nodes with nested child nodes; finally, it performs word vector mapping on the component type feature, component identification feature and component nesting feature of the first nodes and of the second nodes respectively, obtaining the word vector of each node and thus the logic sequence vector of the sample logic.
For example, referring to FIG. 2, the component nesting feature of the node corresponding to the variable component is null, so that node is a first node; the nodes corresponding to the assignment component and the data query component each have nested child nodes, so both are second nodes.
In this embodiment, the server divides nodes by nesting relationship into first nodes without nested child nodes and second nodes with nested child nodes, so that word vector mapping can be performed on the nodes according to their types, yielding logic sequence vectors that accurately reflect the overall structure and internal nesting relationships of the logic. This provides a material basis for the subsequent training of the logic orchestration prediction model and thus improves the efficiency of logic orchestration in the low-code platform.
In an exemplary embodiment, step S306 of performing word vector mapping on the component type feature, the component identification feature and the component nesting feature of each node according to the node type specifically includes the following: in the case that the node is a first node without nested child nodes, performing word vector mapping on the component type feature, the component identification feature and the component nesting feature of the first node respectively, to obtain a component type feature vector, a component identification feature vector and a component nesting feature vector of the first node; and in the case that the node is a second node with nested child nodes, performing word vector mapping on the component type feature and the component identification feature of the second node respectively to obtain a component type feature vector and a component identification feature vector of the second node, and performing word vector mapping on the component nesting feature of the second node based on the child nodes nested under the second node to obtain a component nesting feature vector of the second node.
Specifically, in the case that the node is a first node without nested child nodes, the server performs word vector mapping on the component type feature, the component identification feature and the component nesting feature of the first node respectively, and then combines the resulting component type feature vector, component identification feature vector and component nesting feature vector to obtain the word vector of the first node. For example, referring to the node corresponding to the variable component in FIG. 2: the node is a first node, so the server maps its component type feature "system component", component identification feature "variable component" and component nesting feature "null" into the component type feature vector X21, the component identification feature vector X22 and the component nesting feature vector X23, and obtains the word vector X2 = {X21, X22, X23} of the node from these three feature vectors. It can be appreciated that, since a first node has no nested child nodes, the component nesting feature vector of the first node can be taken as X23 = {0, 0, ..., 0}, where k denotes the vector length of each feature vector.
In the case that the node is a second node with nested child nodes, the server first performs word vector mapping on the component type feature and the component identification feature of the second node to obtain its component type feature vector and component identification feature vector; it then performs word vector mapping on the component nesting feature of the second node based on the word vectors of the child nodes nested under the second node to obtain the component nesting feature vector of the second node, and finally obtains the word vector of the second node from the component type feature vector, the component identification feature vector and the component nesting feature vector. For example, referring to the node corresponding to the assignment component in FIG. 2: the node is a second node, so the server first maps its component type feature "logic component" and component identification feature "assignment component" into the component type feature vector X11 and the component identification feature vector X12. Then, the server convolves the word vectors of the node's child nodes (the node corresponding to the variable component and the node corresponding to the data query component) according to Equation 1, in which W is the convolution weight vector, thereby performing word vector mapping on the component nesting feature of the node and obtaining the component nesting feature vector X13. Finally, the server obtains the word vector X1 = {X11, X12, X13} of the node from the component type feature vector, the component identification feature vector and the component nesting feature vector.
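Since Equation 1 itself is not reproduced in the text, the following sketch substitutes a simple linear map over the pooled child word vectors as a stand-in for the convolution; the weight shape and the mean pooling are assumptions.

```python
# Sketch of the type-dependent node embedding; the child "convolution"
# below is an illustrative stand-in for Equation 1, not the patented form.
import numpy as np

k = 16
rng = np.random.default_rng(0)
W = rng.normal(size=(k, 3 * k))  # stand-in for the convolution weight vector W

def nesting_feature_vector(child_word_vectors: list) -> np.ndarray:
    """Component nesting feature vector of a node from the word vectors
    (each of shape (3, k)) of its nested child nodes."""
    if not child_word_vectors:       # first node: no nested child nodes
        return np.zeros(k)           # X_nesting = {0, 0, ..., 0}
    pooled = np.mean([c.reshape(-1) for c in child_word_vectors], axis=0)
    return W @ pooled                # Equation 1 stand-in

# Second node of FIG. 2: nesting vector built from its two children.
x13 = nesting_feature_vector([np.zeros((3, k)), np.ones((3, k))])
```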
In this embodiment, through the word vector mapping of first nodes and second nodes, the server can obtain logic sequence vectors that fully reflect the overall structure and internal nesting relationships of the logic, which facilitates the subsequent training of the logic orchestration prediction model and thus improves the efficiency of logic orchestration in the low-code platform.
As shown in FIG. 4, in an exemplary embodiment, step S110 of training the logic orchestration prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information specifically includes the following steps:
Step S402, performing feature extraction on the logic sequence sub-vector through the long short-term memory artificial neural network in the logic orchestration prediction model to be trained, to obtain the target vector corresponding to the logic sequence sub-vector.
Step S404, classifying the target vector through the attention mechanism network in the logic orchestration prediction model to be trained, to obtain the prediction component corresponding to the logic sequence sub-vector.
Step S406, training the logic orchestration prediction model to be trained based on the difference information between the prediction component and the target component, to obtain the trained logic orchestration prediction model.
Specifically, FIG. 5 is a schematic structural diagram of the logic orchestration prediction model to be trained. The server performs feature extraction on the logic sequence sub-vector through the long short-term memory artificial neural network in the model to obtain the target vector corresponding to the sub-vector. Then, through the attention mechanism network in the model, it classifies the target vector against the roughly 30 components provided by the low-code platform to obtain the degree of association between the target vector and each component, and identifies the component with the highest degree of association as the prediction component of the logic sequence sub-vector. Finally, the server computes a loss value between the prediction component and the target component according to the loss function, updates the training parameters of the model when the loss value is greater than a preset loss threshold, and retrains the model until the loss value is smaller than the preset loss threshold, obtaining the trained logic orchestration prediction model.
It can be understood that the vector dimension of each feature vector of a node is 1×k, so the vector dimension of the word vector of each node is 3×k, and the vector dimension of the logic sequence vector of each sample logic is consequently 3×k×m, where m is the number of variables constituting the sample logic, i.e., the vector length of the logic sequence vector; assuming the preset sliding-window width is n, the vector dimension of each logic sequence sub-vector is 3×k×n. Referring to FIG. 5, taking the training process for the ith logic sequence sub-vector as an example: the server inputs the first i logic sequence sub-vectors, each with vector dimension 3×k×n, into the logic orchestration prediction model to be trained; through the long short-term memory artificial neural network it obtains a target vector with vector dimension 3×k (the target vector corresponding to the ith logic sequence sub-vector); through the attention mechanism network it then obtains a vector with vector dimension 3n×k; and finally, through the activation function (a softmax function), it obtains an association-degree vector with vector dimension c×1, where c is the number of components provided by the low-code platform. Each element of the association-degree vector represents the degree of association between the logic sequence sub-vector and one of the components provided by the low-code platform, and characterizes the likelihood that the component corresponding to the node following the logic sequence sub-vector is that component.
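A hedged PyTorch sketch of the architecture of FIG. 5 and its threshold-based training loop follows; the hidden size, the form of the attention layer, the optimizer and the threshold value are assumptions, and cross-entropy stands in for the unspecified loss function.

```python
# Hedged sketch of the model of FIG. 5: an LSTM feature extractor, an
# attention layer and a softmax over the c platform components.
import torch
import torch.nn as nn

k, n, c = 16, 3, 30  # feature length, sliding-window width, component count

class OrchestrationPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3 * k, hidden_size=3 * k, batch_first=True)
        self.attn = nn.Linear(3 * k, 1)  # one attention score per time step
        self.head = nn.Linear(3 * k, c)  # degree of association per component

    def forward(self, x):                # x: (batch, n, 3*k) sub-vectors
        h, _ = self.lstm(x)              # (batch, n, 3*k)
        a = torch.softmax(self.attn(h), dim=1)
        target_vector = (a * h).sum(dim=1)  # attention-pooled target vector
        return self.head(target_vector)     # logits; softmax -> associations

model = OrchestrationPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
loss_threshold = 0.05                    # preset loss threshold (assumed)

x = torch.randn(8, n, 3 * k)             # a batch of logic sequence sub-vectors
y = torch.randint(0, c, (8,))            # indices of the target components
for _ in range(10_000):                  # train until the loss falls below it
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    if loss.item() < loss_threshold:
        break
```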
In this embodiment, the server performs supervised training on the logic orchestration prediction model to be trained using the logic sequence sub-vectors and the corresponding target components, obtaining a logic orchestration prediction model whose prediction accuracy meets requirements. Based on this model, the component to be added can be accurately predicted from the overall arrangement of the current logic, freeing developers from tedious visual graphical component selection so that they can better concentrate on creative programming activities, thereby improving the efficiency of logic orchestration in the low-code platform.
In an exemplary embodiment, the attention mechanism network in the logic orchestration prediction model to be trained is obtained as follows: obtaining the context feature vector of each variable constituting each sample logic in the low-code platform; and obtaining the attention mechanism network in the logic orchestration prediction model to be trained based on the context feature vectors of the variables.
The context feature vector of a variable is used to characterize the distribution of the variable within the sample logic to which it belongs, such as how many times and how frequently it occurs in that sample logic.
Specifically, for the syntax tree of each sample logic, the server performs a structure-based traversal (SBT) of the syntax tree to obtain the context feature vector of each variable in the sample logic; the server then determines the weights of the attention mechanism network in the logic orchestration prediction model to be trained based on the context feature vectors of the variables.
For example, the server first determines the variable category feature, variable type feature, variable count feature and variable distance feature of the variable at each node according to the SBT traversal result of the syntax tree. The variable category feature characterizes the category of the variable, the categories being input variable, output variable and local variable; the variable type feature characterizes the data type of the variable, such as integer, floating point or string; the variable count feature characterizes the number of times the variable at the current position has appeared before the current position in the syntax tree; and the variable distance feature characterizes the distance between the variable at the current position and the position where the variable last appeared in the syntax tree. Then, for the variable at each node, the server performs word vector mapping on its variable category feature, variable type feature, variable count feature and variable distance feature to obtain the context feature vector of the variable. For example, if the variable category feature, variable type feature, variable count feature and variable distance feature of the variable at the jth node are mapped to Yj1, Yj2, Yj3 and Yj4 respectively, the context feature vector of the variable at the jth node is Yj = {Yj1, Yj2, Yj3, Yj4}. Next, the server determines the weights of the attention mechanism network in the logic orchestration prediction model to be trained according to Equation 2, in which Yjlk denotes the kth element of the lth feature vector of the jth node, wjlk is the weight of the kth element within the jth node's lth feature vector, w′jlk is the weight of the kth element of the jth node's lth feature vector in the attention mechanism network, and l = 1, 2, 3, 4 is the sequence number of the feature vector (variable category feature, variable type feature, variable count feature and variable distance feature).
In this embodiment, the server determines the context features of each variable in the sample logic through the SBT traversal of the syntax tree of each sample logic, and thereby determines the weights of the attention mechanism network in the logic orchestration prediction model to be trained, ensuring the prediction accuracy of the model during training and further improving the efficiency of logic orchestration in the low-code platform.
In an exemplary embodiment, the above step of obtaining the context feature vector of each variable constituting each sample logic in the low-code platform specifically includes the following: for each variable, determining the current position of the variable in the syntax tree of the corresponding sample logic; determining the variable count feature and variable distance feature of the variable based on the current position; and obtaining the context feature vector of the variable based on the variable count feature and the variable distance feature of the variable.
The variable count feature is used to characterize the number of times the variable occurs before the current position in the syntax tree; the variable distance feature is used to characterize the distance between the current position and the position in the syntax tree where the variable last occurred.
Specifically, for each variable in the sample logic, the server determines the current position of the variable in the syntax tree of the corresponding sample logic; then, based on the current position, it takes the number of times the variable occurs before the current position in the syntax tree as the variable count feature of the variable, and the distance between the current position and the position where the variable last occurred as the variable distance feature; finally, the server performs word vector mapping on the variable category feature, variable type feature, variable count feature and variable distance feature of the variable to obtain the context feature vector of the variable.
In this embodiment, through the SBT structured traversal of the syntax tree, the server can determine the distribution of each variable across the nodes in the syntax tree and thus obtain the context feature vector of the variable in the sample logic to which it belongs. This provides a basis for subsequently determining the attention mechanism network in the logic arrangement prediction model to be trained, ensures the prediction accuracy of the logic arrangement prediction model during training, and further improves the logic arrangement efficiency in the low-code platform.
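As a concrete illustration of the variable number and variable distance features, the following sketch computes both in a single pass over a traversal sequence; the list-of-names representation of the traversal and the fallback distance of 0 for a first occurrence are assumptions, since the patent does not fix either.

```python
def variable_context_features(traversal: list[str]) -> list[tuple[int, int]]:
    """For each position in an SBT traversal, return (number, distance):
    how many times the variable at that position occurred before it, and
    the distance back to its previous occurrence (0 if none)."""
    last_seen: dict[str, int] = {}
    counts: dict[str, int] = {}
    features = []
    for pos, var in enumerate(traversal):
        number = counts.get(var, 0)
        distance = pos - last_seen[var] if var in last_seen else 0
        features.append((number, distance))
        counts[var] = number + 1
        last_seen[var] = pos
    return features

# "a" occurs at positions 0 and 3, so at position 3: number=1, distance=3.
print(variable_context_features(["a", "b", "c", "a"]))
# [(0, 0), (0, 0), (0, 0), (1, 3)]
```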
In an exemplary embodiment, as shown in fig. 6, another logic orchestration prediction model training method is provided, and the method is applied to a server for illustration, and includes the following steps:
step S601, for each sample logic in the low code platform, constructs a syntax tree of the sample logic.
Step S602, determining component type features, component identification features and component nesting features of each node in the syntax tree based on the syntax tree.
Step S603, determining the node type of each node according to the component nesting feature.
Step S604, according to the node type, performing word vector mapping processing on the component type feature, the component identification feature, and the component nesting feature of each of the nodes to obtain the logic sequence vector of the sample logic.
Step S605 performs sliding window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic.
In step S606, the component corresponding to the next vector element of the logic sequence sub-vector in the logic sequence vector is identified as the target component of the logic sequence sub-vector.
Step S607, performing feature extraction processing on the logic sequence sub-vectors through the long-short-term memory artificial neural network in the logic arrangement prediction model to be trained, so as to obtain target vectors corresponding to the logic sequence sub-vectors.
Step S608, classifying the target vector through the attention mechanism network in the logic arrangement prediction model to be trained to obtain the prediction component corresponding to the logic sequence sub-vector.
Step S609, training the logic programming prediction model to be trained based on the difference information between the prediction component and the target component to obtain the logic programming prediction model after training.
Before step S607, the method further includes:
step S610, for each variable in each sample logic in the low code platform, determines the current position of the variable in the syntax tree of the corresponding sample logic.
Step S611, based on the current position, a variable number feature and a variable distance feature of the variable are determined.
Step S612, obtaining the context feature vector of the variable based on the variable frequency feature and the variable distance feature of the variable.
Step S613, obtaining the attention mechanism network in the logic arrangement prediction model to be trained based on the context feature vectors of the variables.
In this embodiment, first, the server divides the nodes by nesting relation into first nodes without nested child nodes and second nodes with nested child nodes, so that word vector mapping can be performed on each node according to its type to obtain a logic sequence vector that accurately reflects the overall structure and internal nesting relations of the logic; this provides the data basis for training the subsequent logic programming prediction model and facilitates that training. Second, the server performs supervised training of the logic arrangement prediction model to be trained using the logic sequence sub-vectors and their corresponding target components, which yields a logic arrangement prediction model whose prediction accuracy meets the requirements; based on this model, the component to be added can be accurately predicted from the overall arrangement of the current logic. Third, through the structured traversal of the syntax tree of each sample logic, the context features of each variable in the sample logic are determined, and thereby the weights in the attention mechanism network in the logic programming prediction model to be trained, which ensures the prediction accuracy of the logic programming prediction model during training. The logic arrangement prediction model training method based on this process frees developers from tedious visual graphic component selection, allowing them to better concentrate on creative programming activities, and improves the logic arrangement efficiency in the low-code platform.
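The model structure of steps S607–S608 — an LSTM that extracts a target vector from each logic sequence sub-vector, followed by an attention-based classifier over components — can be sketched as below. This is a minimal PyTorch sketch; the dimensions, the additive attention form, and the exact division of labor between the LSTM and the attention network are assumptions, not the patent's specification.

```python
import torch
import torch.nn as nn

class LogicOrchestrationPredictor(nn.Module):
    """Sketch: LSTM feature extraction (step S607) plus attention-weighted
    pooling and classification over components (step S608)."""
    def __init__(self, embed_dim: int, hidden_dim: int, num_components: int):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)             # attention scores
        self.classifier = nn.Linear(hidden_dim, num_components)

    def forward(self, sub_vectors: torch.Tensor) -> torch.Tensor:
        # sub_vectors: (batch, window_len, embed_dim)
        out, _ = self.lstm(sub_vectors)                  # (batch, window_len, hidden)
        scores = torch.softmax(self.attn(out), dim=1)    # weights over the window
        target = (scores * out).sum(dim=1)               # target vector per window
        return self.classifier(target)                   # logits over components

model = LogicOrchestrationPredictor(embed_dim=32, hidden_dim=64, num_components=50)
logits = model(torch.randn(8, 5, 32))   # 8 sub-vectors, window length 5
print(logits.shape)                     # torch.Size([8, 50])
```

Training (step S609) would then minimize a standard classification loss, e.g. cross-entropy, between these logits and the target components.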
In an exemplary embodiment, as shown in fig. 7, the present application further provides a logic arrangement method, where the method is applied to a server for illustration; it will be appreciated that the method may also be applied to a terminal, and may also be applied to a system comprising a server and a terminal, and implemented by interaction between the server and the terminal. The server can be realized by an independent server or a server cluster formed by a plurality of servers; the terminal may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and the like. In this embodiment, the method includes the steps of:
step S702, identifying a node to be predicted in the logic arrangement interface, and determining target logic to which the node to be predicted belongs.
Step S704, determining a logic sequence sub-vector corresponding to the node to be predicted based on the target logic.
Step S706, inputting the logic sequence sub-vector corresponding to the node to be predicted into the trained logic arrangement prediction model to obtain at least one prediction component associated with the node to be predicted and the association degree between each prediction component and the node to be predicted.
In step S708, at least one prediction component is presented in the logic orchestration interface according to the degree of association, and the target logic is orchestrated based on the at least one prediction component.
The trained logical arrangement prediction model is obtained by training according to the logical arrangement prediction model training method in any one of the embodiments.
The association degree is used for representing the probability that the component to be inserted by the node to be predicted belongs to the prediction component.
Specifically, the server first identifies, from the logic arrangement interface, the node at which the developer is to insert a visual graphic component, and takes it as the node to be predicted. The server then determines the logic to which the node to be predicted belongs as the target logic. Next, the server determines the syntax tree of the target logic, obtains the logic sequence vector of the target logic based on the syntax tree, and takes a preset number of vector elements preceding the node to be predicted as the logic sequence sub-vector corresponding to the node to be predicted. The server then inputs this logic sequence sub-vector into the trained logic arrangement prediction model, which outputs, for each component, the probability that the component to be inserted at the node to be predicted is that component, i.e. the association degree between the node to be predicted and each component; the server filters out the components whose association degree meets the preset association degree threshold as the prediction components. Finally, the server displays at least one prediction component in the logic arrangement interface in descending order of association degree, and the developer arranges the target logic based on the displayed prediction components.
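The threshold-and-sort step can be illustrated as follows. The sketch assumes a model like the one sketched after the training embodiment above, mapping one sub-vector to per-component logits; the softmax conversion to association degrees and the 0.05 threshold are assumptions (the patent only requires a preset association-degree threshold).

```python
import torch

def recommend_components(model, sub_vector: torch.Tensor,
                         component_names: list[str],
                         threshold: float = 0.05) -> list[tuple[str, float]]:
    """Score every candidate component for one node to be predicted, keep
    those whose association degree meets the threshold, and sort them in
    descending order for display in the logic arrangement interface."""
    with torch.no_grad():
        logits = model(sub_vector.unsqueeze(0))            # (1, num_components)
        degrees = torch.softmax(logits, dim=-1).squeeze(0)
    kept = [(name, float(d)) for name, d in zip(component_names, degrees)
            if float(d) >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)
```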
It should be noted that, regarding the specific limitation of the logic arrangement method, reference may be made to the specific limitation of the logic arrangement prediction model training method, which is not described herein.
In the logic arrangement method provided by this embodiment, the server obtains the logic sequence sub-vector corresponding to the node to be predicted based on the target logic to which the node belongs, and then uses the trained logic arrangement prediction model to predict the probability that the component to be inserted at the node is each candidate component, thereby obtaining the prediction components associated with the node to be predicted and realizing intelligent recommendation for the developer's logic arrangement. The logic arrangement method based on this process frees developers from tedious visual graphic component selection and helps them concentrate on creative programming activities, improving the logic arrangement efficiency in the low-code platform.
In an exemplary embodiment, step S704, determining the logic sequence sub-vector corresponding to the node to be predicted based on the target logic, specifically includes the following: constructing a syntax tree of the target logic based on the target logic; determining the logic sequence vector of the target logic according to the component type feature, component identification feature, and component nesting feature of each node in the syntax tree of the target logic; and extracting from the logic sequence vector of the target logic a sub-vector of a preset length whose last element is the vector element of the node preceding the node to be predicted, and taking this sub-vector as the logic sequence sub-vector corresponding to the node to be predicted.
Specifically, the server first builds the syntax tree of the target logic from the target logic; then it performs word vector mapping processing on the component type feature, component identification feature, and component nesting feature of each node in the syntax tree to obtain the logic sequence vector of the target logic; finally, it extracts from that logic sequence vector a sub-vector of the preset length ending at the vector element of the node preceding the node to be predicted, and uses it as the logic sequence sub-vector corresponding to the node to be predicted.
For example, assume that the logic sequence vector of the target logic is {Z_1, Z_2, Z_3, Z_4, Z_5, Z_6, Z_7, Z_8}, the node to be predicted is node X_6, and the preset length of the logic sequence sub-vector is three vector elements. The vector element of the node preceding the node to be predicted (node X_5) in the logic sequence vector of the target logic is Z_5, so the server obtains the logic sequence sub-vector corresponding to the node to be predicted as {Z_3, Z_4, Z_5}.
In this embodiment, from the vector element of the node preceding the node to be predicted in the logic sequence vector of the target logic and the preset length of the logic sequence sub-vector, the server can accurately extract the logic sequence sub-vector corresponding to the node to be predicted. This provides the basis for subsequently predicting the component corresponding to the node to be predicted, improves the accuracy of component prediction, and further improves the efficiency of logic arrangement in the low-code platform.
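A minimal sketch of this extraction, mirroring the {Z_3, Z_4, Z_5} example above; truncation (rather than padding) for nodes near the start of the sequence is an assumption, since the patent does not address that case.

```python
def logic_sequence_sub_vector(sequence: list, node_index: int,
                              preset_length: int = 3) -> list:
    """Return the `preset_length` vector elements that precede the node to
    be predicted, ending at the previous node's vector element."""
    start = max(0, node_index - preset_length)
    return sequence[start:node_index]

Z = ["Z1", "Z2", "Z3", "Z4", "Z5", "Z6", "Z7", "Z8"]
print(logic_sequence_sub_vector(Z, node_index=5))  # node X6 sits at index 5
# ['Z3', 'Z4', 'Z5']
```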
In order to more clearly illustrate the logic orchestration prediction model training method provided by the embodiments of the present application, a specific embodiment is described below, but it should be understood that the embodiments of the present application are not limited thereto. In an exemplary embodiment, as shown in fig. 8, the present application further provides a visual-language-based intelligent assistance method for low-code logic orchestration, which specifically includes the following steps:
step 1: and (5) extracting characteristics.
First, the server constructs a syntax tree of the sample logic based on the sample logic in the low-code platform, and converts the syntax tree into a vector representation using a word vector model to obtain the logic sequence vector of the sample logic. Then, the server performs an SBT structured traversal of the syntax tree to obtain the context feature vector of the variable represented by each node.
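An SBT structured traversal linearizes the syntax tree while preserving its nesting, typically by bracketing each subtree. The sketch below shows that idea; the dict-based tree encoding and the "(" label … ")" label output format are assumptions, since the patent does not fix a tree representation.

```python
def sbt_traversal(node: dict) -> list[str]:
    """Structure-based traversal: emit each subtree as '(' label ... ')'
    label so the bracketing preserves the nesting relations."""
    seq = ["(", node["label"]]
    for child in node.get("children", []):
        seq.extend(sbt_traversal(child))
    seq.extend([")", node["label"]])
    return seq

tree = {"label": "if", "children": [
    {"label": "cond", "children": []},
    {"label": "assign", "children": [{"label": "x", "children": []}]},
]}
print(" ".join(sbt_traversal(tree)))
# ( if ( cond ) cond ( assign ( x ) x ) assign ) if
```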
Step 2: model design.
The server builds an attention mechanism network from the context feature vectors of the variables in the sample logic and adds it to the long-short-term memory artificial neural network model to obtain the logic arrangement prediction model to be trained.
Step 3: model training.
First, the server performs sliding window processing on the logic sequence vector of the sample logic to obtain a plurality of logic sequence sub-vectors of the same length, and identifies the component corresponding to the next vector element of each logic sequence sub-vector in its logic sequence vector as the target component of that sub-vector. Then, taking the logic sequence sub-vectors as input information and the target components as supervision information, the server trains the logic arrangement prediction model to be trained to obtain the trained logic arrangement prediction model.
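The window/target construction can be sketched as follows; the window length of three is an assumption for illustration.

```python
def sliding_windows(sequence_vector: list, window_len: int = 3):
    """Build (sub-vector, target) training pairs: each window of
    `window_len` elements is paired with the element that follows it,
    which identifies the target component."""
    pairs = []
    for i in range(len(sequence_vector) - window_len):
        window = sequence_vector[i:i + window_len]
        target = sequence_vector[i + window_len]   # next vector element
        pairs.append((window, target))
    return pairs

print(sliding_windows(["Z1", "Z2", "Z3", "Z4", "Z5"], window_len=3))
# [(['Z1', 'Z2', 'Z3'], 'Z4'), (['Z2', 'Z3', 'Z4'], 'Z5')]
```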
Step 4: intelligent recommendation.
Based on the feature extraction method in the step 1, extracting a logic sequence sub-vector corresponding to a node to be predicted in a logic arrangement interface, inputting the logic sequence sub-vector corresponding to the node to be predicted into a trained logic arrangement prediction model to obtain a prediction component to be inserted into the node to be predicted, and displaying the prediction component in the logic arrangement interface to realize intelligent assistance based on low-code logic arrangement.
In this embodiment, the server may obtain, through feature extraction, the sequence features of the sample logic and the context features of each variable in the sample logic, so as to train the logic orchestration prediction model to be trained; and through the trained logic orchestration prediction model, the visual logic semantic features and logic context information can be fully utilized when arranging logic on the low-code platform, with intelligent recommendation made at the insertion point, improving the developer's efficiency in logic orchestration development.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or sub-steps.
Based on the same inventive concept, an embodiment of the application further provides a logic programming prediction model training device for implementing the logic programming prediction model training method described above. The implementation of the solution provided by the device is similar to that described in the above method, so for the specific limitations in the one or more logic programming prediction model training device embodiments provided below, reference may be made to the limitations of the logic programming prediction model training method above; details are not repeated here.
In an exemplary embodiment, as shown in fig. 9, there is provided a logical orchestration prediction model training apparatus, comprising: a grammar construction module 902, a sequence determination module 904, a sliding window processing module 906, a component determination module 908, and a model training module 910, wherein:
a grammar construction module 902 for constructing a grammar tree of sample logic for each sample logic in the low code platform; each node in the syntax tree corresponds to each variable that makes up the sample logic, which is the instantiation object of the component.
A sequence determining module 904, configured to determine a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable that makes up the sample logic.
And a sliding window processing module 906, configured to perform sliding window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic.
The component determining module 908 is configured to identify a component corresponding to a next vector element of the logic sequence sub-vector in the logic sequence vector as a target component of the logic sequence sub-vector.
The model training module 910 is configured to train the logic arrangement prediction model to be trained by using the logic sequence sub-vector as input information and using the target component of the logic sequence sub-vector as supervision information, so as to obtain a trained logic arrangement prediction model.
In an exemplary embodiment, the sequence determining module 904 is further configured to determine component type features, component identification features, and component nesting features of each node in the syntax tree based on the syntax tree; the component nesting feature is used for representing nesting relation among components; determining the node type of each node according to the component nesting characteristics; and according to the node type, carrying out word vector mapping processing on the component type characteristics, the component identification characteristics and the component nesting characteristics of each node in each node to obtain a logic sequence vector of the sample logic.
In an exemplary embodiment, the sequence determining module 904 is further configured to, when the node is a first node without a nested child node, perform word vector mapping on the component type feature, the component identification feature, and the component nesting feature of the first node, to obtain a component type feature vector, a component identification feature vector, and a component nesting feature vector of the first node; and under the condition that the node is a second node with nested child nodes, respectively carrying out word vector mapping processing on the component type feature and the component identification feature of the second node to obtain a component type feature vector and a component identification feature vector of the second node, and carrying out word vector mapping processing on the component nesting feature of the second node based on the child nodes nested by the second node to obtain a component nesting feature vector of the second node.
In an exemplary embodiment, the model training module 910 is further configured to perform feature extraction processing on the logic sequence sub-vectors through the long-short-term memory artificial neural network in the logic arrangement prediction model to be trained to obtain target vectors corresponding to the logic sequence sub-vectors; classify the target vectors through the attention mechanism network in the logic arrangement prediction model to be trained to obtain the prediction components corresponding to the logic sequence sub-vectors; and train the logic programming prediction model to be trained based on the difference information between the prediction components and the target components to obtain the trained logic programming prediction model.
In an exemplary embodiment, the logic orchestration prediction model training device further comprises a network determination module for obtaining context feature vectors for respective variables in the low-code platform that make up each sample logic; based on the context feature vectors of the respective variables, a network of attention mechanisms in the logically orchestrated predictive model to be trained is obtained.
In an exemplary embodiment, the network determining module is further configured to determine, for each variable, a current location of the variable in the syntax tree of the corresponding sample logic; determining variable frequency characteristics and variable distance characteristics of the variable based on the current position; the variable number feature is used for representing the number of times that the variable appears before the current position in the grammar tree; the variable distance feature is used for representing the distance between the current position and the last position where the variable appears in the grammar tree; and obtaining the context feature vector of the variable based on the variable frequency feature and the variable distance feature of the variable.
Based on the same inventive concept, the embodiment of the application also provides a logic arranging device for realizing the above-mentioned logic arranging method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the logic arrangement device provided below may refer to the limitation of the logic arrangement method hereinabove, and will not be repeated herein.
In an exemplary embodiment, as shown in fig. 10, there is provided a logic orchestration apparatus comprising: logic determination module 1002, vector determination module 1004, component prediction module 1006, and logic orchestration module 1008, wherein:
the logic determination module 1002 is configured to identify a node to be predicted in the logic arrangement interface, and determine target logic to which the node to be predicted belongs.
The vector determining module 1004 is configured to determine a logic sequence sub-vector corresponding to the node to be predicted based on the target logic.
The component prediction module 1006 is configured to input a logic sequence sub-vector corresponding to a node to be predicted into a trained logic arrangement prediction model, so as to obtain at least one prediction component associated with the node to be predicted, and a degree of association between each prediction component and the node to be predicted; the trained logically organized prediction model is the trained logically organized prediction model in the logically organized prediction model training method.
The logic arrangement module 1008 is configured to present at least one prediction component in the logic arrangement interface according to the association degree, and arrange the target logic based on the at least one prediction component.
In an exemplary embodiment, the vector determination module 1004 is further configured to construct a syntax tree of the target logic based on the target logic; determine the logic sequence vector of the target logic according to the component type feature, component identification feature, and component nesting feature of each node in the syntax tree of the target logic; and extract from the logic sequence vector of the target logic a sub-vector of a preset length whose last element is the vector element of the node preceding the node to be predicted, taking this sub-vector as the logic sequence sub-vector corresponding to the node to be predicted.
The above-described logical arrangement prediction model training apparatus and each module in the logical arrangement apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In an exemplary embodiment, a computer device is provided, which may be a server, and an internal structure thereof may be as shown in fig. 11. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing relevant data of sample logic provided by the low-code platform, such as syntax trees of the sample logic, logic sequence vectors of the sample logic, context feature vectors of various variables in the sample logic and the like. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a logical orchestration prediction model training method or a logical orchestration method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an exemplary embodiment, a computer device is also provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In an exemplary embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method embodiments described above.
In an exemplary embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the flows in the above method embodiments may be implemented by a computer program instructing relevant hardware; the computer program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the flows of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the application; their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.
Claims (13)
1. A method of training a logically orchestrated predictive model, the method comprising:
constructing a syntax tree of sample logic for each sample logic in a low code platform; each node in the grammar tree corresponds to each variable which forms the sample logic, and the variable is an instantiation object of a component;
determining a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
Sliding window processing is carried out on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
the component corresponding to the next vector element of the logic sequence sub-vector in the logic sequence vector is confirmed to be a target component of the logic sequence sub-vector;
and training the logic programming prediction model to be trained by taking the logic sequence sub-vector as input information and taking the target component of the logic sequence sub-vector as supervision information to obtain a logic programming prediction model after training.
2. The method of claim 1, wherein the determining the logical sequence vector of the sample logic based on the syntax tree comprises:
determining component type features, component identification features and component nesting features of each node in the syntax tree based on the syntax tree; the component nesting feature is used for representing nesting relation among components;
determining the node type of each node according to the component nesting characteristics;
and according to the node type, performing word vector mapping processing on the component type feature, the component identification feature, and the component nesting feature of each of the nodes to obtain the logic sequence vector of the sample logic.
3. The method according to claim 2, wherein the performing word vector mapping processing on the component type feature, the component identification feature, and the component nesting feature of each of the nodes according to the node type comprises:
under the condition that the node is a first node without nested child nodes, respectively carrying out word vector mapping processing on component type features, component identification features and component nesting features of the first node to obtain component type feature vectors, component identification feature vectors and component nesting feature vectors of the first node;
and under the condition that the node is a second node with nested child nodes, respectively carrying out word vector mapping processing on the component type feature and the component identification feature of the second node to obtain a component type feature vector and a component identification feature vector of the second node, and carrying out word vector mapping processing on the component nesting feature of the second node based on the child nodes nested by the second node to obtain a component nesting feature vector of the second node.
4. The method according to claim 1, wherein the training the logic arrangement prediction model to be trained with the logic sequence sub-vector as input information and the target component of the logic sequence sub-vector as supervision information to obtain a trained logic arrangement prediction model comprises:
Performing feature extraction processing on the logic sequence sub-vectors through a long-short-term memory artificial neural network in the logic arrangement prediction model to be trained to obtain target vectors corresponding to the logic sequence sub-vectors;
classifying the target vector through the attention mechanism network in the logic arrangement prediction model to be trained to obtain a prediction component corresponding to the logic sequence sub-vector;
and training the logic programming prediction model to be trained based on the difference information between the prediction component and the target component to obtain the logic programming prediction model after training.
5. The method according to claim 4, characterized in that the network of attention mechanisms in the logical orchestration prediction model to be trained is obtained by:
obtaining a context feature vector of each variable composing each sample logic in the low code platform;
and obtaining the attention mechanism network in the logic arrangement prediction model to be trained based on the context feature vectors of the variables.
6. The method of claim 5, wherein said obtaining a context feature vector for each variable comprising each sample logic in the low code platform comprises:
For each variable, determining a current position of the variable in a syntax tree of the corresponding sample logic;
determining variable frequency characteristics and variable distance characteristics of the variable based on the current position; the variable number feature is used to characterize the number of times the variable appears before the current position in the syntax tree; the variable distance feature is used to characterize the distance between the current location and the last location in the syntax tree where the variable occurred;
and obtaining the context feature vector of the variable based on the variable frequency feature and the variable distance feature of the variable.
7. A method of logic orchestration, the method comprising:
identifying a node to be predicted in a logic arrangement interface, and determining target logic to which the node to be predicted belongs;
determining a logic sequence sub-vector corresponding to the node to be predicted based on the target logic;
inputting the logic sequence sub-vector corresponding to the node to be predicted into a logic arrangement prediction model after training is completed, and obtaining at least one prediction component associated with the node to be predicted and the association degree between each prediction component and the node to be predicted; the trained logically laid-out predictive model is trained according to the method of any one of claims 1 to 6;
And displaying the at least one prediction component in the logic programming interface according to the association degree, and programming the target logic based on the at least one prediction component.
8. The method of claim 7, wherein determining the logical sequence sub-vector corresponding to the node to be predicted based on the target logic comprises:
constructing a grammar tree of the target logic based on the target logic;
determining a logic sequence vector of the target logic according to the component type characteristic, the component identification characteristic and the component nesting characteristic of each node in the grammar tree of the target logic;
and extracting, from the logic sequence vector of the target logic, a sub-vector whose vector length is a preset length and whose last element is the vector element of the node preceding the node to be predicted in the logic sequence vector of the target logic, and taking the sub-vector as the logic sequence sub-vector corresponding to the node to be predicted.
9. A logic orchestration prediction model training device, the device comprising:
a grammar construction module for constructing a grammar tree of sample logic for each sample logic in a low code platform; each node in the grammar tree corresponds to each variable which forms the sample logic, and the variable is an instantiation object of a component;
A sequence determining module, configured to determine a logic sequence vector of the sample logic based on the syntax tree; one vector element in the logic sequence vector corresponds to one variable constituting the sample logic;
the sliding window processing module is used for carrying out sliding window processing on the logic sequence vector to obtain at least one logic sequence sub-vector of the sample logic;
the component determining module is used for determining the component corresponding to the next vector element of the logic sequence sub-vector in the logic sequence vector as the target component of the logic sequence sub-vector;
and the model training module is used for training the logic programming prediction model to be trained by taking the logic sequence sub-vector as input information and taking the target component of the logic sequence sub-vector as supervision information to obtain the logic programming prediction model after training.
10. A logic orchestration apparatus, the apparatus comprising:
the logic determining module is used for identifying a node to be predicted in the logic arrangement interface and determining target logic to which the node to be predicted belongs;
the vector determining module is used for determining a logic sequence sub-vector corresponding to the node to be predicted based on the target logic;
The component prediction module is used for inputting the logic sequence sub-vector corresponding to the node to be predicted into a logic arrangement prediction model after training is completed, so as to obtain at least one prediction component associated with the node to be predicted and the association degree between each prediction component and the node to be predicted; the trained logically laid-out predictive model is trained according to the method of any one of claims 1 to 6;
and the logic arrangement module is used for displaying the at least one prediction component in the logic arrangement interface according to the association degree, and arranging the target logic based on the at least one prediction component.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the logical orchestration prediction model training method according to any one of claims 1 to 6, or the steps of the logical orchestration method according to any one of claims 7 to 8.
12. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor implements the steps of the logical orchestration prediction model training method according to any one of claims 1 to 6, or the steps of the logical orchestration method according to any one of claims 7 to 8.
13. A computer program product comprising a computer program, characterized in that the computer program when executed by a processor implements the steps of the logical orchestration prediction model training method according to any one of claims 1 to 6, or the steps of the logical orchestration method according to any one of claims 7 to 8.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310715015.5A | 2023-06-15 | 2023-06-15 | Logic programming prediction model training method, logic programming method and device |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116757269A (en) | 2023-09-15 |
Family
ID=87958374
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310715015.5A (Pending) | Logic programming prediction model training method, logic programming method and device | 2023-06-15 | 2023-06-15 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN116757269A (en) |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |