CN112465117B - Contract generation model construction method, device, equipment and storage medium - Google Patents

Contract generation model construction method, device, equipment and storage medium

Info

Publication number
CN112465117B
CN112465117B (application CN202011342145.1A)
Authority
CN
China
Prior art keywords
target
value
feature
model
contract
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011342145.1A
Other languages
Chinese (zh)
Other versions
CN112465117A (en)
Inventor
李泽远 (Li Zeyuan)
王健宗 (Wang Jianzong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011342145.1A priority Critical patent/CN112465117B/en
Publication of CN112465117A publication Critical patent/CN112465117A/en
Application granted granted Critical
Publication of CN112465117B publication Critical patent/CN112465117B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/64Protecting data integrity, e.g. using checksums, certificates or signatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Finance (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Evolutionary Computation (AREA)
  • Computer Security & Cryptography (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to model construction and provides a contract generation model construction method comprising the following steps: when a participant is detected to have received a modeling instruction, acquiring contract association data corresponding to the participant and extracting features from the data to obtain a feature matrix; inputting the feature matrix into a convolutional layer of a convolutional neural network and performing a dot-product calculation between the feature matrix and a preset convolution kernel to obtain the target features output by the convolutional layer; inputting the batch-normalized target features into an activation layer of the convolutional neural network to obtain the model parameters output by the activation layer; and sending the model parameters to an organizer, which integrates them and propagates the gradients back to obtain the contract generation model. The invention also discloses a contract generation model construction apparatus, device, and storage medium. The invention realizes automatic generation of smart contracts through federated learning.

Description

Contract generation model construction method, device, equipment and storage medium
Technical Field
The present invention relates to the field of model construction, and in particular, to a method, apparatus, device, and storage medium for constructing a contract generation model.
Background
Blockchain is an innovative technology that provides a secure and reliable data-recording architecture. A blockchain system generally consists of a data layer, a network layer, a consensus layer, an incentive layer, a contract layer, and an application layer. The contract layer is an important component of the blockchain, providing it with a flexible, programmable platform. Through smart contracts deployed in advance on the contract layer, a series of message-transfer and transaction operations can be performed without any interference from external third parties, ensuring efficiency and security throughout the information-transfer and transaction process. Smart contracts on blockchains are widely used in scenarios such as finance, medical care, the sharing economy, education, IoT, energy trading, supply chains, and copyright.
However, the smart contracts currently deployed on the contract layer of a blockchain are not fully intelligent: their intelligence consists only in that, when a trigger condition is met, the relevant contract clauses are triggered and message delivery completes automatically. In practice, a smart contract deployed on a blockchain is just a set of preset, fixed logic and rules. This fixity limits the working efficiency of the blockchain to a certain extent: different smart contracts must be deployed for each specific application scenario, which increases the investment of human resources and also hinders the intelligent development of the blockchain.
Disclosure of Invention
The main aim of the invention is to provide a contract generation model construction method, apparatus, device, and storage medium that solve the technical problem of automatically generating smart contracts suited to different blockchain application scenarios.
In addition, in order to achieve the above object, the invention provides a contract generation model construction method comprising the steps of:
when a participant is detected to have received a modeling instruction, acquiring contract association data corresponding to the participant, and extracting features from the contract association data to obtain a feature matrix;
inputting the feature matrix into a convolutional layer of a convolutional neural network, and performing a dot-product calculation between the feature matrix and a preset convolution kernel to obtain the target features output by the convolutional layer;
inputting the batch-normalized target features into an activation layer of the convolutional neural network to obtain the model parameters output by the activation layer;
and sending the model parameters to the organizer, which integrates them and propagates the gradients back to obtain the contract generation model.
Optionally, after the step of performing the dot-product calculation between the feature matrix and a preset convolution kernel to obtain the target features output by the convolutional layer, the method further includes:
fitting the target features to a normal distribution, and calculating the standard deviation and mean of the target features;
and calculating target values for the target features from the standard deviation and mean, and taking the target values as the batch-normalized target features.
Optionally, the step of inputting the batch-normalized target features into an activation layer of the convolutional neural network to obtain the model parameters output by the activation layer includes:
inputting the target values into the activation layer of the convolutional neural network, modifying the target values smaller than zero to zero through a linear rectification function in the activation layer, and taking the modified target values as the model parameters output by the activation layer.
Optionally, the step of sending the model parameters to the organizer, which integrates them and propagates the gradients back to obtain the contract generation model, includes:
sending the model parameters to the organizer, so that the organizer obtains target model parameters from the model parameters according to a preset integration algorithm and sends the target model parameters back to the participants;
and updating, by the participants, the model parameters according to the target model parameters to obtain the contract generation model.
Optionally, the step of updating the model parameters according to the target model parameters to obtain the contract generation model includes:
calculating the loss value of the target model parameters through a preset loss function;
and updating the model parameters according to the loss value and a preset gradient descent algorithm to obtain the contract generation model.
Optionally, the step of updating the model parameters according to the loss value and a preset gradient descent algorithm to obtain the contract generation model includes:
if the loss value is smaller than or equal to a preset threshold, updating the model parameters according to the target model parameters to obtain the contract generation model;
if the loss value is larger than the preset threshold, adjusting the target model parameters so that the loss value computed from the adjusted target model parameters through the preset loss function is smaller than or equal to the preset threshold;
and updating the model parameters according to the adjusted target model parameters to obtain the contract generation model.
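The threshold-gated update described in the steps above can be sketched in Python as follows. This is only an illustrative sketch: the loss function, threshold, and learning rate are hypothetical placeholders, not values from the patent.

```python
THRESHOLD = 0.05  # preset loss threshold (assumed value)
LR = 0.1          # gradient-descent learning rate (assumed value)

def loss_fn(params):
    # Placeholder loss: mean squared distance from an assumed optimum at 1.0.
    return sum((p - 1.0) ** 2 for p in params) / len(params)

def grad_fn(params):
    # Gradient of the placeholder loss above.
    return [2 * (p - 1.0) / len(params) for p in params]

def update_model(local_params, target_params):
    """If the loss of the aggregated (target) parameters is within the
    preset threshold, adopt them directly; otherwise adjust them by
    gradient descent until the loss falls to or below the threshold."""
    params = list(target_params)
    while loss_fn(params) > THRESHOLD:
        params = [p - LR * g for p, g in zip(params, grad_fn(params))]
    return params  # these replace local_params in the contract generation model
```

With the placeholder loss, `update_model` converges geometrically toward the assumed optimum, terminating as soon as the threshold condition of the claim is satisfied.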
Optionally, the step of inputting the feature matrix into a convolutional layer of a convolutional neural network and performing the dot-product calculation between the feature matrix and a preset convolution kernel to obtain the target features output by the convolutional layer includes:
acquiring the matrix side length of the preset convolution kernel, and obtaining a target side length according to the matrix side length, a preset step length, and the side length of the feature matrix;
and performing the dot-product calculation between the kernel values of the preset convolution kernel and the feature values of the feature matrix to obtain target feature values, the target side length serving as the side length of the target features and the target feature values as their feature values.
In addition, in order to achieve the above object, the present invention provides a contract generation model construction apparatus including:
the feature extraction module, for acquiring contract association data corresponding to the participant when the participant is detected to have received a modeling instruction, and extracting features from the contract association data to obtain a feature matrix;
the convolution module, for inputting the feature matrix into a convolutional layer of a convolutional neural network and performing the dot-product calculation between the feature matrix and a preset convolution kernel to obtain the target features output by the convolutional layer;
the activation module, for inputting the batch-normalized target features into an activation layer of the convolutional neural network to obtain the model parameters output by the activation layer;
and the model generation module, for sending the model parameters to the organizer, which integrates them and propagates the gradients back to obtain the contract generation model.
In addition, in order to achieve the above object, the present invention also provides a contract generation model construction apparatus including: the system comprises a memory, a processor and a contract generation model building program stored on the memory and capable of running on the processor, wherein the contract generation model building program realizes the steps of the contract generation model building method when being executed by the processor.
In addition, in order to achieve the above object, the present invention also provides a storage medium having stored thereon a contract generation model building program which, when executed by a processor, implements the steps of the contract generation model building method as described above.
Embodiments of the invention provide a contract generation model construction method, apparatus, device, and storage medium. The method builds a contract generation model through federated learning. Each participant creates its own model locally from its contract association data: features are extracted from the data to obtain a feature matrix; the feature matrix is input into a convolutional layer of a convolutional neural network and dot-multiplied with a preset convolution kernel to obtain target features; the target features are batch-normalized and then input into the activation layer, which outputs the model parameters. The participants then send these model parameters to the organizer, which integrates them and propagates the gradients back, finally yielding the contract generation model.
Drawings
FIG. 1 is a schematic diagram of a hardware structure of an implementation manner of a contract generation model construction device according to an embodiment of the present invention;
FIG. 2 is a flowchart of a first embodiment of the contract generation model building method of the present invention;
FIG. 3 is a schematic view of point multiplication of target features in a first embodiment of the contract generation model construction method of the present invention;
FIG. 4 is a schematic diagram of a nonlinear mapping in a first embodiment of the contract generation model construction method of the present invention;
FIG. 5 is a flowchart illustrating a second embodiment of a contract generation model construction method;
FIG. 6 is a schematic diagram illustrating functional blocks of an embodiment of the contract generation model building apparatus.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present invention, and have no specific meaning per se. Thus, "module," "component," or "unit" may be used in combination.
The contract generation model construction terminal (also called terminal, device, or terminal equipment) of the embodiments of the invention may be a PC, or a mobile terminal device with a display function such as a smartphone, tablet computer, or portable computer.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally further include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM or a non-volatile memory such as disk storage, and may optionally also be a storage device separate from the processor 1001.
Optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and so on. The sensors may include, for example, light sensors and motion sensors. Specifically, the light sensors may include an ambient light sensor, which adjusts the brightness of the display according to the ambient light, and a proximity sensor, which turns off the display and/or backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor detects acceleration in all directions (generally three axes) and, when the terminal is stationary, the magnitude and direction of gravity; it can be used to recognize the terminal's posture (e.g., landscape/portrait switching, related games, magnetometer gesture calibration) and for vibration-recognition functions (e.g., pedometer, tap detection). The mobile terminal may of course also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described here.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a contract generation model building program may be included in the memory 1005 as one type of storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke a contract generation model building program stored in the memory 1005, which when executed by the processor, implements the operations in the contract generation model building method provided by the embodiment described below.
Based on the above device hardware structure, an embodiment of the contract generation model construction method is provided.
Referring to fig. 2, in a first embodiment of the contract generation model construction method of the present invention, the contract generation model construction method includes:
Step S10, when the participant is detected to receive a modeling instruction, contract association data corresponding to the participant is obtained, and feature extraction is carried out on the contract association data to obtain a feature matrix.
The contract generation model construction method of this embodiment is applied to the scenario of generating smart contracts, which, as noted, run at the contract layer of a blockchain. The contract generation model is built through federated learning, which involves participants and an organizer. To protect the participants' data privacy, each participant uses its own data to create a model locally and sends only the model parameters to the organizer; the organizer revises the model parameters based on those provided by the several participants and sends the revised parameters back to the participants. Multi-party joint modeling is thereby achieved while the participants' data privacy is protected.
Taking horizontal federated learning as an example, the contract association data in this embodiment are the local model-training data of the participants. In horizontal federated learning, the participants' local data share the same feature space while the samples differ; for example, two banks located in two different places have similar business features but different clients. After a participant in the federation receives a modeling instruction, it performs modeling locally using its locally stored data. The modeling process in this embodiment comprises data acquisition, data analysis, feature extraction, and model evaluation and optimization. During data acquisition, care should be taken that the amount of data collected is large enough and that the participants collect data over the same time window; during data analysis, defective or incomplete data should be screened out; during feature extraction, the features that overlap among the participants should be extracted. Feature extraction produces a feature matrix in which each row of the matrix represents one record and each column one feature: a "1" indicates that the record exhibits the feature, and a "0" that it does not.
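The 0/1 feature-matrix construction described above can be sketched as follows. The feature names and the keyword-matching rule are invented purely for illustration; the patent does not specify how features are detected.

```python
# Hypothetical overlap features shared across participants.
FEATURES = ["transfer", "escrow", "multi_party", "time_lock"]

def extract_feature_matrix(records):
    """Build the 0/1 feature matrix: each row is one record, each column
    a feature; 1 marks that the record exhibits the feature, 0 that it
    does not. Here a naive substring match stands in for real feature
    extraction."""
    matrix = []
    for text in records:
        row = [1 if feat in text else 0 for feat in FEATURES]
        matrix.append(row)
    return matrix

records = ["transfer with escrow", "multi_party time_lock transfer"]
fm = extract_feature_matrix(records)
# → [[1, 1, 0, 0], [1, 0, 1, 1]]
```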
Step S20: inputting the feature matrix into a convolutional layer of a convolutional neural network, and performing a dot-product calculation between the feature matrix and a preset convolution kernel to obtain the target features output by the convolutional layer.
This scheme models on a convolutional-neural-network framework (i.e., the aforementioned convolutional neural network). Such a framework consists of a data input layer, convolutional layers, activation layers, and a data output layer, where there may be several convolutional and activation layers. The preset convolution kernel in this embodiment is a k×k matrix, k×k meaning that both the length and the width of the matrix are k, where k is a positive integer smaller than both the length and the width of the feature matrix. The feature matrix is fed to the convolutional layer through the data input layer and then dot-multiplied with the preset convolution kernel. The dot-product process works as follows (see fig. 3): suppose the feature matrix is a 5×5 matrix, the preset convolution kernel a 3×3 matrix, the preset step length 1, and the feature values as shown in the figure. The dot product starts with the kernel placed at the upper-left corner of the feature matrix, and the number of positions where a feature value 1 inside the dashed box coincides with a 1 in the preset convolution kernel is counted; in fig. 3 there are 4 such coinciding 1s in the first dashed box, so the value at the corresponding position of the target feature (also a matrix) is 4. The dashed box is then slid rightwards and downwards by the step length until the dot product is complete, finally yielding a {(m-k)/l+1} × {(n-k)/l+1} matrix, where m and n are the length and width of the feature matrix, k is the side length of the preset convolution kernel, and l is the preset step length.
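The sliding dot product described above can be sketched in pure Python. The 5×5 matrix, 3×3 kernel, and step length of 1 mirror the worked example; the concrete feature values below are assumed for illustration, since fig. 3 is not reproduced here.

```python
def conv2d_valid(fm, kernel, stride=1):
    """Slide the k*k kernel over the feature matrix and take the
    element-wise product sum at each position (the dot-product step
    described above). Each output side length is (side - k)//stride + 1."""
    m, n = len(fm), len(fm[0])
    k = len(kernel)
    out = []
    for i in range(0, m - k + 1, stride):
        row = []
        for j in range(0, n - k + 1, stride):
            s = sum(fm[i + a][j + b] * kernel[a][b]
                    for a in range(k) for b in range(k))
            row.append(s)
        out.append(row)
    return out

# With binary inputs, each output entry counts the positions where a 1
# in the window coincides with a 1 in the kernel.
fm = [[1, 1, 1, 0, 0],
      [0, 1, 1, 1, 0],
      [0, 0, 1, 1, 1],
      [0, 0, 1, 1, 0],
      [0, 1, 1, 0, 0]]
kernel = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
target = conv2d_valid(fm, kernel)
# → [[4, 3, 4], [2, 4, 3], [2, 3, 4]]
```

A 5×5 input with a 3×3 kernel and stride 1 yields the expected (5-3)/1+1 = 3 side length.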
Step S30: inputting the batch-normalized target features into the activation layer of the convolutional neural network to obtain the model parameters output by the activation layer.
There may be several feature matrices and hence several target features; after the convolutional layer outputs the target features, they are batch-normalized. Batch normalization in this embodiment means: first, the target feature output by the convolutional layer is fitted to a normal distribution and its mean and standard deviation are computed; the batch-normalized result is then obtained from the formula X = (an - Ea)/Va, where the result X is a matrix of concrete values, a denotes the target feature, Ea the mean of all its feature values, Va the standard deviation of all its feature values, and an the n-th feature value of the target feature. For the target feature of fig. 3, the mean of all feature values is 29/9, the standard deviation of all feature values follows from the usual formula, and substituting all values into the formula gives the batch-normalized result. That result is then fed to the activation layer of the convolutional neural network. Since X = (an - Ea)/Va can produce values below 0, the activation layer performs a nonlinear mapping with a linear rectification function: as shown in fig. 4, every value smaller than 0 is mapped to 0 while the remaining values are kept unchanged, and the rectified result forms the model parameters output by the activation layer, completing the creation of the local model.
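The batch normalization X = (an - Ea)/Va followed by linear rectification can be sketched as below. A population standard deviation is assumed, and the sample values (mean 29/9, matching the worked example) are illustrative only.

```python
import math

def batch_normalize(feature):
    """X = (an - Ea)/Va: subtract the mean of all feature values and
    divide by their standard deviation (population form assumed)."""
    values = [v for row in feature for v in row]
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    return [[(v - mean) / std for v in row] for row in feature]

def relu(feature):
    """Linear rectification in the activation layer: values below zero
    are modified to zero, the rest are kept unchanged."""
    return [[max(0.0, v) for v in row] for row in feature]

target = [[4, 3, 4], [2, 4, 3], [2, 3, 4]]   # assumed target feature values
params = relu(batch_normalize(target))        # model parameters of this layer
```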
Step S40: sending the model parameters to the organizer, which integrates them and propagates the gradients back to obtain the contract generation model.
After creating the model, each participant trains it locally. Following the federated-learning procedure, every participant sends its trained model parameters to the organizer. On receiving the model parameters from all participants, the organizer first integrates them, i.e., computes a weighted average of all the model parameters, wt+1 = Σk (nk/n) wk, where wt+1 denotes the integrated model parameters, nk the size of the k-th participant's index set of data points, n the total number of data points, and wk the model parameters of the k-th participant after local training. The organizer then sends the integrated model parameters back to each participant so that the participants can revise their local models accordingly: after receiving the integrated model parameters fed back by the organizer, a participant takes them as the weights of its convolutional neural network, computes a loss value through the preset loss function, and updates the parameters by gradient descent, finally obtaining the contract generation model that generates the smart contract to be deployed on the contract layer.
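The organizer-side integration described above matches a FedAvg-style weighted average. A minimal sketch follows, with hypothetical parameter vectors and sample counts (the patent does not give concrete values):

```python
def federated_average(participant_params, sample_counts):
    """Organizer-side integration: weight each participant's parameter
    vector by its data share n_k / n and sum, yielding the integrated
    model parameters w_{t+1}."""
    total = sum(sample_counts)
    dim = len(participant_params[0])
    return [sum(w[i] * n / total
                for w, n in zip(participant_params, sample_counts))
            for i in range(dim)]

# Two hypothetical participants holding 100 and 300 local samples:
agg = federated_average([[0.2, 0.4], [0.6, 0.8]], [100, 300])
# ≈ [0.5, 0.7]
```

The participant with more data points pulls the integrated parameters toward its own, which is the weighting the nk/n factor encodes.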
Specifically, the refinement of step S20 includes:
Step a1: acquiring the matrix side length of the preset convolution kernel, and obtaining a target side length from the matrix side length, the preset step length, and the side length of the feature matrix.
Step a2: determining the target feature values from the kernel values of the preset convolution kernel and the feature values of the feature matrix, and taking the target side length as the side length of the target feature and the target feature values as the feature values of the target feature.
The preset convolution kernel in this embodiment is a k×k matrix, k×k meaning that both the length and the width of the matrix are k, where k is a positive integer smaller than both the length and the width of the feature matrix. The side length of a matrix in this embodiment determines the number of feature values it contains: that number equals the length of the matrix multiplied by its width. The feature matrix is fed to the convolutional layer through the data input layer and then dot-multiplied with the preset convolution kernel. Suppose the feature matrix is a 5×5 matrix, the preset convolution kernel a 3×3 matrix, the preset step length 1, and the feature values as shown in fig. 3; the dot-product computation then proceeds as follows. The kernel starts its dot product at the upper-left corner of the feature matrix, and the positions where a feature value 1 in the dashed box coincides with a 1 in the preset convolution kernel are counted; since 4 feature values in the first (upper-left) dashed box of fig. 3 coincide with 1s of the preset convolution kernel, the value at the corresponding (upper-left) position of the target feature (also a matrix) is 4. The dashed box is slid rightwards and downwards by the step length until it reaches the lower-right corner of the feature matrix, after which the target feature is obtained with side lengths (m-k)/l+1 and (n-k)/l+1, where m and n are the length and width of the feature matrix, l is the preset step length, and k is the side length of the preset convolution kernel.
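The target side length (m-k)/l+1 derived above can be checked with a small helper, assuming (as in the example) that the step length evenly divides the difference:

```python
def target_side_length(matrix_side, kernel_side, stride):
    """Side length of the target feature: (side - k) / l + 1,
    assuming the stride l evenly divides (side - k)."""
    assert (matrix_side - kernel_side) % stride == 0
    return (matrix_side - kernel_side) // stride + 1

# 5x5 feature matrix, 3x3 kernel, step length 1 → 3x3 target feature
side = target_side_length(5, 3, 1)  # → 3
```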
Specifically, the steps following step S20 include:
and b1, normally distributing the target features, and calculating standard deviation and mean value of the target features.
And b2, calculating a target numerical value corresponding to the target feature according to the standard deviation and the mean value, and taking the target numerical value as the target feature subjected to batch standardization processing.
As can be seen, the target feature output by the convolution layer is treated as normally distributed, and the mean and standard deviation of the target feature are then calculated. Specifically, the mean of the target feature is the mean of all feature values in the target feature; as shown in fig. 3, the mean of the target feature equals (4+3+4+2+3+4+2+3+4)/9. The standard deviation of the target feature is the standard deviation of all feature values in the target feature. The result of the batch normalization process is calculated by the formula X = (an − Ea)/Va, where the result X is a matrix with specific values, a represents the target feature, Ea represents the mean of all feature values of the target feature, Va represents the standard deviation of all feature values of the target feature, and an represents the nth feature value in the target feature. As shown in fig. 3, the mean and standard deviation of all feature values of the target feature can be obtained from their respective calculation formulas, and substituting all values into the formula yields the result of the batch normalization process shown in fig. 4.
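The batch normalization step X = (an − Ea)/Va described above can be sketched as follows. The feature values are those of the fig. 3 worked example; the use of the population standard deviation is an assumption for illustration.

```python
import math

def batch_normalize(target):
    """Normalize every feature value: subtract the mean of all values,
    then divide by their (population) standard deviation."""
    values = [v for row in target for v in row]
    mean = sum(values) / len(values)                      # Ea
    std = math.sqrt(sum((v - mean) ** 2 for v in values)
                    / len(values))                        # Va
    return [[(v - mean) / std for v in row] for row in target]

# The 3x3 target feature from the fig. 3 example; its mean is 29/9.
target = [[4, 3, 4], [2, 3, 4], [2, 3, 4]]
normalized = batch_normalize(target)
```

After normalization the values have zero mean and unit standard deviation, which is what makes them suitable input for the activation layer.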
Specifically, the step of refining in step S30 further includes:
And c1, inputting the target value into an activation layer of the convolutional neural network, modifying the target value smaller than zero into zero through a linear rectification function in the activation layer, and taking the modified target value as a model parameter output by the activation layer.
As can be seen, the result of the batch normalization process (i.e., the target value in this embodiment) is input into the activation layer of the convolutional neural network. As the formula X = (an − Ea)/Va above shows, the result of the batch normalization process contains values smaller than 0, so the activation layer uses a linear rectification function to perform a nonlinear mapping on it. Specifically, the linear rectification function is f(x) = max(0, x). It can be understood that when a value in the result of the batch normalization process (i.e., the argument of the linear rectification function) is smaller than 0, f(x) = 0, and when the value is greater than or equal to 0, f(x) = x. That is, as shown in fig. 4, the values smaller than 0 in the result of the batch normalization process are modified to 0, the values greater than or equal to 0 are retained, and the nonlinear mapping of the batch normalization result is used as the model parameters output by the activation layer, thereby creating the local model.
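The linear rectification described above can be sketched in a few lines. The batch-normalized input values here are illustrative, not the exact fig. 4 values.

```python
def relu(matrix):
    """Apply f(x) = max(0, x) element-wise: negative values become 0,
    non-negative values pass through unchanged."""
    return [[max(0.0, v) for v in row] for row in matrix]

# Illustrative batch-normalized values (mixture of signs).
normalized = [[0.78, -0.22, 0.78],
              [-1.22, -0.22, 0.78],
              [-1.22, -0.22, 0.78]]
print(relu(normalized))
```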
In this embodiment, a contract generation model is constructed using a federated learning method. When a participant uses contract-associated data to create its model locally, feature extraction is first performed on the contract-associated data to obtain a feature matrix. The feature matrix is input into the convolution layer of a convolutional neural network, where a dot-product computation between the feature matrix and a preset convolution kernel yields the target features. Before being input into the activation layer, the target features are subjected to batch normalization; the batch-normalized target features are then input into the activation layer, and the model parameters output by the activation layer are obtained. After obtaining the model parameters, the participant sends them to the organizer, which integrates the model parameters and transfers them back in a gradient-reverse manner, finally yielding the contract generation model.
Further, referring to fig. 5, a second embodiment of the contract generation model construction method of the present invention is proposed on the basis of the above-described embodiment of the present invention.
This embodiment refines step S40 of the first embodiment and differs from the above-described embodiment of the present invention as follows:
And step S41, the model parameters are sent to the organizer so that the organizer can obtain target model parameters according to the model parameters and a preset integration algorithm, and the organizer can send the target model parameters to the participants.
And step S42, updating the model parameters by the participants according to the target model parameters to obtain a contract generation model.
The target model parameter in this embodiment refers to an integrated model parameter. The preset integration algorithm may be the formula wt+1 = (n1/n)×w1 + (n2/n)×w2 + … + (nK/n)×wK, where wt+1 is the integrated model parameter, nk represents the number of data points of the kth participant (the size of its index set of data points), n is the total number of data points across all participants, and wk represents the model parameter of the kth participant after local training. The integrated model parameter is the target model parameter in this embodiment. The integrated target model parameter is sent to the participant, so that the participant uses a preset loss function and the target model parameter to update the model parameters of each layer of the convolutional neural network. Specifically, the local model parameters of the participant are replaced with the target model parameters, training data are input into the local model with the replaced parameters to obtain the output result of the model, and the loss value corresponding to the target model parameters is determined by calculating the deviation (i.e., the loss value) of the output result. If the loss value is greater than a preset threshold, the model parameters are updated, with the goal that the loss value becomes smaller than the preset threshold, or until the parameters have been updated a certain number of times, after which the contract generation model is obtained.
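The weighted integration formula above can be sketched as follows. The participant data counts and parameter values are illustrative assumptions; real model parameters would be tensors rather than short lists.

```python
def aggregate(params, counts):
    """Weighted average of the participants' parameter vectors:
    w_{t+1} = sum_k (n_k / n) * w_k, with n the total data count."""
    n = sum(counts)
    dim = len(params[0])
    return [sum(counts[k] * params[k][i] for k in range(len(params))) / n
            for i in range(dim)]

# Two participants holding 100 and 300 data points respectively.
w_next = aggregate([[1.0, 2.0], [5.0, 6.0]], [100, 300])
print(w_next)  # [4.0, 5.0]
```

The participant with more data points contributes proportionally more to the integrated target model parameter.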
Specifically, the step of refining in step S42 includes:
and d1, calculating the loss value of the target model parameter through a preset loss function.
And d2, updating the model parameters according to the loss value and a preset gradient descent algorithm to obtain a contract generation model.
It is known that the integrated target model parameters are sent to the participant, so that the participant uses a preset loss function and the target model parameters to calculate the loss value of the target model parameters and updates the model parameters of each layer of the convolutional neural network according to the magnitude of the loss value. Specifically, the local model parameters of the participant are replaced with the target model parameters, training data are then input into the local model with the replaced parameters to obtain the output result of the model, and the loss value corresponding to the target model parameters is determined by calculating the deviation (i.e., the loss value) of the output result. If the loss value is greater than a preset threshold, the model parameters are updated, with the goal that the loss value becomes smaller than the preset threshold, or until the parameters have been updated a certain number of times; the contract generation model is then obtained.
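The update loop described above can be sketched with a deliberately simple stand-in: a one-parameter model, a toy squared-error loss, and a plain gradient descent step. The loss function, learning rate, and data are illustrative assumptions, not the embodiment's actual loss function.

```python
def update_until_converged(w, x, y, threshold=1e-4, lr=0.1, max_updates=1000):
    """Start from the received target parameter w, evaluate the loss on
    local data (x, y), and keep updating by gradient descent until the
    loss falls below the preset threshold or the update count runs out."""
    loss = (w * x - y) ** 2
    for _ in range(max_updates):
        pred = w * x
        loss = (pred - y) ** 2
        if loss <= threshold:        # stopping condition (1)
            break
        grad = 2 * (pred - y) * x    # d(loss)/dw for the squared error
        w -= lr * grad
    return w, loss                   # stopping condition (2): max_updates

w, loss = update_until_converged(w=0.0, x=1.0, y=3.0)
print(round(w, 3), loss <= 1e-4)
```

Either stopping condition ends the loop: the loss dropping below the preset threshold, or the preset maximum number of updates being reached.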
Specifically, the step d2 refining step includes:
And e1, if the loss value is smaller than or equal to a preset threshold value, updating the model parameters according to the target model parameters to obtain a contract generation model.
And e2, if the loss value is larger than a preset threshold value, adjusting the target model parameter so that the loss value obtained by calculating the adjusted target model parameter through the preset loss function is smaller than or equal to the preset threshold value.
And e3, updating the model parameters according to the adjusted target model parameters to obtain a contract generation model.
It can be seen that, in addition to receiving the target model parameters sent by the organizer, the participant can also adjust the model parameters on its own, for example by adjusting the number of iterations of the local model or the step length used in the gradient descent update. Whenever the model parameters are adjusted, the loss value of the model must be recalculated after each iteration so that the parameters can be adjusted again according to the loss value. The condition for the model to stop iterative updating is one of the following two: (1) the model meets a set accuracy requirement associated with the particular model design, such as the loss value falling to a sufficiently low level (i.e., below the preset threshold) or the verification accuracy reaching a sufficiently high level; (2) a preset maximum number of iterations is reached.
In this embodiment, the participants modify the parameters of their local models according to the target model parameters sent by the organizer, and the shared parameters thus carry the data characteristics of the multiple participants, so that the automatic generation of smart contracts suitable for various application scenarios is realized.
In addition, referring to fig. 6, an embodiment of the present invention further proposes a contract generation model construction apparatus, including:
the feature extraction module 10 is configured to obtain contract association data corresponding to the participant when it is detected that the participant receives a modeling instruction, and perform feature extraction on the contract association data to obtain a feature matrix;
The convolution module 20 is configured to input the feature matrix into a convolution layer of a convolutional neural network, and perform dot multiplication calculation on the feature matrix and a preset convolution kernel to obtain a target feature output by the convolution layer;
the activation module 30 is configured to input target features subjected to batch normalization processing to an activation layer of the convolutional neural network, and obtain model parameters output by the activation layer;
the model generating module 40 is configured to send the model parameters to the organizer, and the organizer integrates and transfers the model parameters in a gradient reverse direction to obtain a contract generating model.
Optionally, the contract generation model building device includes:
the first calculation module is used for normally distributing the target characteristics and calculating standard deviation and mean values of the target characteristics;
and the second calculation module is used for calculating a target numerical value corresponding to the target feature according to the standard deviation and the mean value, and taking the target numerical value as the target feature subjected to batch standardization processing.
Optionally, the activating module 30 further includes:
And the numerical value modification unit is used for inputting the target numerical value to an activation layer of the convolutional neural network, modifying the target numerical value smaller than zero into zero through a linear rectification function in the activation layer, and taking the modified target numerical value as a model parameter output by the activation layer.
Optionally, the model generating module 40 includes:
The first calculation unit is used for sending the model parameters to the organizer so that the organizer can obtain target model parameters according to the model parameters and a preset integration algorithm, and the organizer can send the target model parameters to the participants;
and the first parameter updating unit is used for updating the model parameters by the participant according to the target model parameters to obtain a contract generation model.
Optionally, the parameter updating unit includes:
the second calculation unit is used for calculating the loss value of the target model parameter through a preset loss function;
And the second parameter updating unit is used for updating the model parameters according to the loss value and a preset gradient descent algorithm to obtain a contract generation model.
Optionally, the second parameter updating unit includes:
a third parameter updating unit, configured to update the model parameter according to the target model parameter if the loss value is less than or equal to a preset threshold value, to obtain a contract generation model;
The model parameter adjusting unit is used for adjusting the target model parameter if the loss value is larger than a preset threshold value, so that the loss value obtained by calculating the adjusted target model parameter through the preset loss function is smaller than or equal to the preset threshold value;
and the fourth parameter updating unit is used for updating the model parameters according to the adjusted target model parameters to obtain a contract generation model.
Optionally, the convolution module 20 includes:
The acquisition unit is used for acquiring the matrix side length of a preset convolution kernel, and acquiring a target side length according to the matrix side length, a preset step length and the side length of the feature matrix;
and the determining unit is used for performing point multiplication calculation on the kernel value in the preset convolution kernel and the characteristic value in the characteristic matrix to obtain a target characteristic value, and taking the target side length as the side length of a target characteristic, wherein the target characteristic value is taken as the characteristic value of the target characteristic.
In addition, an embodiment of the present invention also proposes a storage medium having stored thereon a contract generation model building program that, when executed by a processor, implements the operations in the contract generation model building method provided in the above embodiment.
The methods performed by the program modules may refer to various embodiments of the methods according to the present invention, and are not described herein.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity/operation/object from another entity/operation/object without necessarily requiring or implying any actual such relationship or order between such entities/operations/objects; the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points. The apparatus embodiments described above are merely illustrative, in which the units illustrated as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present invention. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the contract generation model building method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (8)

1. A contract generation model construction method, characterized in that the contract generation model construction method is applied to participants and organizers in federal learning, the contract generation model construction method comprising the steps of:
when the participant is detected to receive a modeling instruction, contract association data corresponding to the participant is obtained, and feature extraction is carried out on the contract association data to obtain a feature matrix;
Inputting the feature matrix into a convolution layer of a convolution neural network, and performing point multiplication calculation on the feature matrix and a preset convolution kernel to obtain target features output by the convolution layer;
inputting target characteristics subjected to batch standardization processing into an activation layer of the convolutional neural network to obtain model parameters output by the activation layer;
The model parameters are sent to the organizer, and the organizer integrates and transfers the model parameters in a gradient reverse mode to obtain a contract generation model;
After the step of performing point multiplication calculation on the feature matrix and a preset convolution kernel to obtain the target feature output by the convolution layer, the method comprises the following steps: performing normal distribution on the target features, and calculating standard deviation and mean values of the target features; calculating a target value corresponding to the target feature according to the standard deviation and the mean value through a formula X= (an-Ea)/Va, and taking the target value as the target feature subjected to batch standardization processing; wherein the target value X is a matrix with specific values, a represents the target feature, Ea represents the average value of all feature values of the target feature, Va represents the standard deviation of all feature values of the target feature, and an represents the nth feature value in the target feature;
The step of inputting the target characteristics subjected to batch normalization processing to an activation layer of the convolutional neural network to obtain model parameters output by the activation layer comprises the following steps of:
Inputting the target value to an activation layer of the convolutional neural network, and performing nonlinear mapping on the target value through a linear rectification function in the activation layer, wherein the linear rectification function is f (x) =max (0, x), and x is the target value; when the target value is smaller than 0, f (x) =0, when the target value is larger than or equal to 0, f (x) =x, modifying the target value smaller than zero in the target value to zero, and reserving the value larger than or equal to 0 in the target value, wherein the modified target value is used as the model parameter output by the activation layer.
2. The contract-generation model construction method as set forth in claim 1, characterized in that said step of transmitting said model parameters to said organizer, integrating and gradient reverse transferring said model parameters, to obtain a contract-generation model includes:
the model parameters are sent to the organizer, so that the organizer obtains target model parameters according to the model parameters and a preset integration algorithm, and the organizer sends the target model parameters to the participants;
and updating the model parameters by the participants according to the target model parameters to obtain a contract generation model.
3. The method for constructing a contract generation model as recited in claim 2, wherein said step of updating said model parameters based on said target model parameters to obtain a contract generation model includes:
calculating the loss value of the target model parameter through a preset loss function;
And updating the model parameters according to the loss value and a preset gradient descent algorithm to obtain a contract generation model.
4. The contract generation model construction method as set forth in claim 3, characterized in that the step of updating the model parameters according to the loss value and a preset gradient descent algorithm to obtain a contract generation model includes:
if the loss value is smaller than or equal to a preset threshold value, updating the model parameters according to the target model parameters to obtain a contract generation model;
If the loss value is larger than a preset threshold, the target model parameter is adjusted so that the loss value obtained by calculating the adjusted target model parameter through the preset loss function is smaller than or equal to the preset threshold;
and updating the model parameters according to the adjusted target model parameters to obtain a contract generation model.
5. The contract generation model construction method as set forth in claim 1, wherein the step of inputting the feature matrix into a convolutional layer of a convolutional neural network, performing a point multiplication calculation on the feature matrix and a preset convolutional kernel, and obtaining target features output by the convolutional layer includes:
Acquiring the matrix side length of a preset convolution kernel, and obtaining a target side length according to the matrix side length, a preset step length and the side length of the feature matrix;
And performing point multiplication calculation on the kernel value in the preset convolution kernel and the feature value in the feature matrix to obtain a target feature value, wherein the target side length is used as the side length of a target feature, and the target feature value is used as the feature value of the target feature.
6. A contract generation model construction apparatus, characterized in that the contract generation model construction apparatus includes:
The feature extraction module is used for acquiring contract association data corresponding to a participant when the participant is detected to receive a modeling instruction, and extracting features of the contract association data to obtain a feature matrix;
The convolution module is used for inputting the feature matrix into a convolution layer of a convolution neural network, and performing point multiplication calculation on the feature matrix and a preset convolution kernel to obtain target features output by the convolution layer;
The activation module is used for inputting target characteristics subjected to batch standardization processing into an activation layer of the convolutional neural network to obtain model parameters output by the activation layer;
the model generation module is used for sending the model parameters to an organizer, and the organizer integrates and reversely transfers the model parameters to obtain a contract generation model;
Wherein the contract generation model construction device further includes: the first calculation module is used for normally distributing the target characteristics and calculating standard deviation and mean values of the target characteristics; the second calculation module is used for calculating a target numerical value corresponding to the target feature according to the standard deviation and the mean value through a formula X= (an-Ea)/Va, and taking the target numerical value as the target feature subjected to batch standardization processing; wherein the target value X is a matrix with specific values, a represents the target feature, Ea represents the average value of all feature values of the target feature, Va represents the standard deviation of all feature values of the target feature, and an represents the nth feature value in the target feature;
Wherein the activation module further comprises: a numerical modification unit, configured to input the target numerical value to an activation layer of the convolutional neural network, and perform nonlinear mapping on the target numerical value through a linear rectification function in the activation layer, where the linear rectification function is f (x) =max (0, x), and x is the target numerical value; when the target value is smaller than 0, f (x) =0, when the target value is larger than or equal to 0, f (x) =x, modifying the target value smaller than zero in the target value to zero, and reserving the value larger than or equal to 0 in the target value, wherein the modified target value is used as the model parameter output by the activation layer.
7. A contract generation model construction apparatus, characterized in that the contract generation model construction apparatus includes: memory, a processor, and a contract generation model building program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the contract generation model building method of any one of claims 1 to 5.
8. A storage medium having stored thereon a contract generation model building program which, when executed by a processor, implements the steps of the contract generation model building method according to any one of claims 1 to 5.
CN202011342145.1A 2020-11-25 2020-11-25 Contract generation model construction method, device, equipment and storage medium Active CN112465117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011342145.1A CN112465117B (en) 2020-11-25 2020-11-25 Contract generation model construction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011342145.1A CN112465117B (en) 2020-11-25 2020-11-25 Contract generation model construction method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112465117A CN112465117A (en) 2021-03-09
CN112465117B true CN112465117B (en) 2024-05-07

Family

ID=74808350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011342145.1A Active CN112465117B (en) 2020-11-25 2020-11-25 Contract generation model construction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112465117B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515890B (en) * 2021-05-21 2024-03-08 华北电力大学 Renewable energy day-ahead scene generation method based on federal learning

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876610A (en) * 2018-05-31 2018-11-23 深圳市零度智控科技有限公司 Intelligent contract implementation method, user equipment, storage medium and device
CN109656134A (en) * 2018-12-07 2019-04-19 电子科技大学 A kind of end-to-end decision-making technique of intelligent vehicle based on space-time joint recurrent neural network
CN110502898A (en) * 2019-07-31 2019-11-26 深圳前海达闼云端智能科技有限公司 Method, system, device, storage medium and the electronic equipment of the intelligent contract of audit
CN111125779A (en) * 2019-12-17 2020-05-08 山东浪潮人工智能研究院有限公司 Block chain-based federal learning method and device
CN111242290A (en) * 2020-01-20 2020-06-05 福州大学 Lightweight privacy protection generation countermeasure network system
WO2020140649A1 (en) * 2019-01-03 2020-07-09 深圳壹账通智能科技有限公司 Blockchain smart contract management method and apparatus, electronic device and storage medium
CN111429085A (en) * 2020-03-02 2020-07-17 中国平安人寿保险股份有限公司 Contract data generation method and device, electronic equipment and storage medium



Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A formal modeling and verification method for token smart contracts; Ouyang Hengyi, Xiong Yan, Huang Wenchao; Computer Engineering (Issue 10) *
A smart contract automatic classification model based on a hierarchical attention mechanism and a bidirectional long short-term memory neural network; Wu Yuxin, Cai Ting, Zhang Dabin; Computer Applications (Issue 04) *

Also Published As

Publication number Publication date
CN112465117A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
US11797084B2 (en) Method and apparatus for training gaze tracking model, and method and apparatus for gaze tracking
EP3885967A1 (en) Object key point positioning method and apparatus, image processing method and apparatus, and storage medium
CN109448090B (en) Image processing method, device, electronic equipment and storage medium
WO2022022274A1 (en) Model training method and apparatus
CN108184050B (en) Photographing method and mobile terminal
US11417095B2 (en) Image recognition method and apparatus, electronic device, and readable storage medium using an update on body extraction parameter and alignment parameter
CN106534669A (en) Shooting composition method and mobile terminal
CN113601503A (en) Hand-eye calibration method and device, computer equipment and storage medium
CN112541786A (en) Site selection method and device for network points, electronic equipment and storage medium
CN112465117B (en) Contract generation model construction method, device, equipment and storage medium
US11593637B2 (en) Convolution streaming engine for deep neural networks
WO2022179603A1 (en) Augmented reality method and related device thereof
WO2024067113A1 (en) Action prediction method and related device thereof
CN116524581B (en) Human eye image facula classification method, system, equipment and storage medium
CN107807940B (en) Information recommendation method and device
CN110415171B (en) Image processing method, image processing device, storage medium and electronic equipment
CN117056589A (en) Article recommendation method and related equipment thereof
WO2023051215A1 (en) Gaze point acquisition method and apparatus, electronic device and readable storage medium
CN116259078A (en) Pesticide recommendation method, device, equipment and storage medium
CN107977628B (en) Neural network training method, face detection method and face detection device
CN114841361A (en) Model training method and related equipment thereof
KR102239355B1 (en) Image correction method and system through correction pattern analysis
CN113361380A (en) Human body key point detection model training method, detection method and device
US10891755B2 (en) Apparatus, system, and method for controlling an imaging device
CN114237861A (en) Data processing method and equipment thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant