CN111062493B - Longitudinal federation method, device, equipment and medium based on public data

Longitudinal federation method, device, equipment and medium based on public data

Info

Publication number
CN111062493B
CN111062493B
Authority
CN
China
Prior art keywords
data
longitudinal
model
reinforcement learning
information
Prior art date
Legal status
Active
Application number
CN201911333204.6A
Other languages
Chinese (zh)
Other versions
CN111062493A (en)
Inventor
梁新乐
刘洋
陈天健
董苗波
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN201911333204.6A
Publication of CN111062493A
Application granted
Publication of CN111062493B
Legal status: Active

Classifications

    • G06N 20/00 Machine learning
    • G06N 3/02 Neural networks; G06N 3/045 Combinations of networks
    • G06N 3/02 Neural networks; G06N 3/08 Learning methods
    • G06Q 40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06V 10/40 Extraction of image or video features

Abstract

The application discloses a longitudinal federation method, device, equipment and medium based on public data. The longitudinal federation method based on public data comprises the following steps: receiving a longitudinal federal service request sent by a reinforcement learning device and extracting target public data corresponding to the longitudinal federal service request; acquiring a feature extraction model and inputting the target public data into the feature extraction model to obtain feature vector information; and sending the feature vector information to the reinforcement learning device so as to carry out the longitudinal federation and update a preset longitudinal federal model. The method and the device solve the technical problem of low efficiency in machine learning model construction.

Description

Longitudinal federation method, device, equipment and medium based on public data
Technical Field
The application relates to the technical field of artificial intelligence in financial technology (Fintech), and in particular to a longitudinal federation method, device, equipment and medium based on public data.
Background
With the continuous development of financial technology, especially internet finance, more and more technologies (such as distributed computing, blockchain and artificial intelligence) are applied to the financial field. At the same time, the financial industry places higher requirements on these technologies, for example, higher requirements on the distribution of back-office workloads in the financial industry.
With the continuous development of computer software and artificial intelligence, deep learning is applied more and more widely. In the prior art, longitudinal (vertical) federated learning in deep learning is usually performed between two participants whose data complement each other, so that a machine learning model can be built jointly. However, the amount of data owned by the two participants is usually small, whereas the power of deep learning rests on massive data. As a result, the feature richness of the training samples of a machine learning model built by combining such limited data is low, so the computational and predictive performance of the machine learning model cannot reach the expected effect; that is, the performance of the machine learning model is poor. Furthermore, training the machine learning model with training samples of low feature richness increases the number of invalid or poor training rounds, that is, the number of invalid calculations in the machine learning process increases. This reduces the calculation efficiency of machine learning, lengthens the time needed to train the machine learning model, and makes the construction efficiency of the machine learning model extremely low. The prior art therefore suffers from the technical problem of low machine learning model construction efficiency.
Disclosure of Invention
The application mainly aims to provide a longitudinal federation method, device, equipment and medium based on public data, so as to solve the technical problem of low machine learning model construction efficiency in the prior art.
In order to achieve the above object, an embodiment of the present application provides a longitudinal federal method based on public data, which is applied to a longitudinal federal device based on public data, and the longitudinal federal method based on public data includes:
receiving a longitudinal federal service request sent by reinforcement learning equipment, and extracting target public data corresponding to the longitudinal federal service request;
acquiring a feature extraction model, and inputting the target public data into the feature extraction model to acquire feature vector information;
and sending the feature vector information to the reinforcement learning equipment to carry out the longitudinal federation and update a preset longitudinal federation model.
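As a rough illustration of these three steps, the following Python sketch shows one possible flow on the public federal service side. The class, method and attribute names (VerticalFederationServer, extract, get, train_on, apply_gradient) and the synchronous exchange with the reinforcement learning device are assumptions made for this sketch and are not defined by the application.

```python
class VerticalFederationServer:
    """Public federal service party (illustrative sketch; names are assumptions)."""

    def __init__(self, data_integration_module, model_integration_module, federal_model):
        self.data = data_integration_module      # public information data sources
        self.models = model_integration_module   # registered feature extraction models
        self.federal_model = federal_model       # preset longitudinal federal model

    def handle_request(self, request, rl_device):
        # Step 1: extract the target public data for the longitudinal federal service request.
        target_public_data = self.data.extract(request.data_source_id, request.current_state)
        # Step 2: acquire the feature extraction model and compute the feature vector information.
        extractor = self.models.get(request.model_id)
        feature_vector = extractor(target_public_data)
        # Step 3: send the feature vector to the reinforcement learning device and update the
        # preset longitudinal federal model with the gradient information it feeds back.
        gradient = rl_device.train_on(feature_vector)
        self.federal_model.apply_gradient(gradient)
        return feature_vector
```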
Optionally, the federated service request includes model identification information,
the step of obtaining a feature extraction model, inputting the target public data into the feature extraction model, and obtaining feature vector information comprises:
calling a feature extraction model corresponding to the model identification information through a preset model integration module based on the model identification information;
and inputting the target public data into the feature extraction model to perform feature extraction on the target public information to obtain the feature vector information.
Optionally, the feature extraction model comprises a feature extraction neural network, the target public data comprises target picture data,
the step of inputting the target public data into the feature extraction model to perform feature extraction on the target public information to obtain the feature vector information includes:
inputting the target picture data into the feature extraction neural network to carry out convolution processing on the target public data to obtain a convolution processing result;
performing pooling processing on the convolution processing result to obtain a pooling processing result;
and repeatedly and alternately performing the convolution processing and the pooling processing on the pooling processing result for a preset number of times to obtain the feature vector information.
Optionally, the step of receiving a longitudinal federal service request sent by a reinforcement learning device and extracting target public data corresponding to the longitudinal federal service request includes:
receiving a longitudinal federal service request sent by reinforcement learning equipment, and calling initial public information of the longitudinal federal request through a preset data scheduling module;
and preprocessing the initial public information through a preset data preprocessing module to obtain the target public information.
Optionally, the step of sending the feature vector information to the reinforcement learning device to perform the longitudinal federation, and updating a preset longitudinal federation model includes:
sending the feature vector information to the reinforcement learning equipment to receive gradient information fed back by the reinforcement learning equipment;
and training and updating the preset longitudinal federal model based on the gradient information.
Optionally, the feature vector information serves as model input data of the reinforcement learning device, and the model input data is input into the reinforcement learning device so that the gradient information corresponding to the reinforcement learning device is calculated by training the reinforcement learning device.
Optionally, the step of sending the feature vector information to the reinforcement learning device to receive gradient information fed back by the reinforcement learning device is followed by:
judging whether an updating command is received or not, and if the updating command is received, training and updating the preset longitudinal federated model based on the gradient information;
and if the updating command is not received, abandoning the update of the preset longitudinal federal model.
The application also provides a longitudinal federal device based on public data, which is applied to longitudinal federal equipment based on public data, and the longitudinal federal device based on public data comprises:
the extraction module is used for receiving a longitudinal federal service request sent by the reinforcement learning equipment and extracting target public data corresponding to the longitudinal federal service request;
the characteristic extraction module is used for acquiring a characteristic extraction model, inputting the target public data into the characteristic extraction model and acquiring characteristic vector information;
and the sending module is used for sending the feature vector information to the reinforcement learning equipment so as to carry out the longitudinal federation and update a preset longitudinal federation model.
Optionally, the feature extraction module includes:
the model selection unit is used for calling a feature extraction model corresponding to the model identification information through a preset model integration module based on the model identification information;
and the feature extraction unit is used for inputting the target public data into the feature extraction model so as to perform feature extraction on the target public information and obtain the feature vector information.
Optionally, the feature extraction unit includes:
a convolution subunit, configured to input the target picture data into the feature extraction neural network, so as to perform convolution processing on the target public data, and obtain a convolution processing result;
the pooling subunit is used for pooling the convolution processing result to obtain a pooled processing result;
and the repetition subunit is used for repeatedly and alternately performing the convolution processing and the pooling processing on the pooling processing result for a preset number of times to obtain the feature vector information.
Optionally, the extraction module comprises:
the calling unit is used for receiving a longitudinal federal service request sent by the reinforcement learning equipment and calling initial public information of the longitudinal federal request through a preset data scheduling module;
and the preprocessing unit is used for preprocessing the initial public information through a preset data preprocessing module to obtain the target public information.
Optionally, the sending module includes:
the sending unit is used for sending the feature vector information to the reinforcement learning equipment so as to receive gradient information fed back by the reinforcement learning equipment;
and the training updating unit is used for training and updating the preset longitudinal federal model based on the gradient information.
Optionally, the sending module further includes:
the first judging unit is used for judging whether an updating command is received or not, and if the updating command is received, training and updating the preset longitudinal federated model based on the gradient information;
and the second judging unit is used for abandoning the updating of the preset longitudinal federal model if the updating command is not received.
The application also provides a longitudinal federal device based on public data, which comprises: a memory, a processor, and a program of the public-data based longitudinal federal method stored in the memory and executable on the processor, the program of the public-data based longitudinal federal method being executable by the processor to implement the steps of the public-data based longitudinal federal method as described above.
The present application also provides a medium, which is a readable storage medium, on which a program for implementing the public-data-based longitudinal federation method is stored, the program for implementing the public-data-based longitudinal federation method implementing the steps of the public-data-based longitudinal federation method as described above when executed by a processor.
According to the method, a longitudinal federal service request sent by a reinforcement learning device is received, target public data corresponding to the longitudinal federal service request is extracted, a feature extraction model is then obtained, the target public data is input into the feature extraction model to obtain feature vector information, and the feature vector information is then sent to the reinforcement learning device so as to carry out the longitudinal federation and update a preset longitudinal federal model. That is, based on the longitudinal federal service request sent by the reinforcement learning device, the application feeds feature vector information corresponding to public data back to the reinforcement learning device; the reinforcement learning device then performs longitudinal federated learning based on the feature vector information, and the preset longitudinal federal model is updated. Because the public data is massive training data, the number and feature richness of training samples can be greatly expanded, achieving the purpose of training a machine learning model on massive training data, improving the robustness and generality of the machine learning model, and avoiding the performance degradation of the longitudinal federated machine learning model caused by too few training samples. Further, training the machine learning model with massive, feature-rich training data reduces the number of invalid calculations in the machine learning process and improves model training efficiency during machine learning; that is, it improves the calculation efficiency of machine learning, shortens the training time of the machine learning model, and thereby improves the construction efficiency of the machine learning model, avoiding the low construction efficiency caused by training samples of low feature richness. The technical problem of low machine learning model construction efficiency in the prior art is thus solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a first embodiment of a longitudinal federation method based on public data according to the present application;
FIG. 2 is a schematic diagram of a longitudinal federation-based public information service architecture in the longitudinal federation method based on public data according to the present application;
FIG. 3 is a flow chart illustrating a second embodiment of the public data based longitudinal federation method of the present application;
fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the public data-based longitudinal federation method of the present application, referring to fig. 1, the public data-based longitudinal federation method includes:
step S10, receiving a longitudinal federal service request sent by reinforcement learning equipment, and extracting target public data corresponding to the longitudinal federal service request;
in this embodiment, it should be noted that the longitudinal federation method based on public data is applied to a public federation service provider, where the public federation service provider includes a data integration module, the data integration module includes a plurality of public information data sources, and the public information data sources include picture data, text data, and the like.
A longitudinal federal service request sent by the reinforcement learning device is received, and the target public data corresponding to the longitudinal federal service request is extracted. Specifically, the longitudinal federal service request sent by the reinforcement learning device is received, where the longitudinal federal service request includes data source identification information such as character strings and keywords, and the target public data corresponding to the data source identification information is then extracted from the data integration module. The target public data is data related to the current state of the reinforcement learning device, where the current state includes a position state, a motion state, and the like. For example, assuming that the reinforcement learning device is an unmanned vehicle and the longitudinal federal service request sent by the unmanned vehicle requests learning of the vehicle distribution at its current position, the target public data is the vehicle distribution information centered on the unmanned vehicle.
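For illustration, a minimal sketch of this step is given below. The request fields, the dictionary-style data integration module and the distance-based relevance test are assumptions made for the sketch (matching the unmanned-vehicle example), not structures defined by the application.

```python
import math
from dataclasses import dataclass

# Illustrative request layout; field names are assumptions for this sketch.
@dataclass
class LongitudinalFederalServiceRequest:
    data_source_id: str     # data source identification information (character string / keyword)
    model_id: str           # model identification information (used later in step S20)
    current_state: dict     # e.g. {"position": (x, y), "motion": "driving"}

def extract_target_public_data(data_integration_module, request):
    """Select the public information data source named in the request and keep only the
    records related to the current state of the reinforcement learning device."""
    source = data_integration_module[request.data_source_id]   # e.g. a list of records
    return [record for record in source
            if record_is_relevant(record, request.current_state)]

def record_is_relevant(record, current_state, radius=1.0):
    # Hypothetical relevance test: keep records whose position lies within `radius`
    # of the requesting device (vehicle distribution centered on the unmanned vehicle).
    return math.dist(record["position"], current_state["position"]) <= radius
```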
The step of receiving a longitudinal federal service request sent by reinforcement learning equipment and extracting target public data corresponding to the longitudinal federal service request includes:
step S11, receiving a longitudinal federal service request sent by a reinforcement learning device, and calling initial public information of the longitudinal federal request through a preset data scheduling module;
in this embodiment, it should be noted that the data integration module includes the preset data scheduling module, and the initial public information is an initial public information data source, that is, the initial public information is an original public information data source that is not processed.
The method comprises the steps of receiving a longitudinal federal service request sent by reinforcement learning equipment, calling initial public information of the longitudinal federal request through a preset data scheduling module, specifically, receiving the longitudinal federal service request sent by the reinforcement learning equipment, and calling the initial public information corresponding to the longitudinal federal request through the preset data scheduling module based on data source identification information in the longitudinal federal service request.
Step S12, preprocessing the initial public information through a preset data preprocessing module to obtain the target public information;
in this embodiment, the preprocessing includes image clipping processing, word deletion processing, and the like, and the data integration module includes a preset data preprocessing module.
The initial public information is preprocessed through the preset data preprocessing module to obtain the target public information. Specifically, the preset data preprocessing module preprocesses the initial public information and removes information irrelevant to the current state of the reinforcement learning device, so as to obtain the target public information. For example, if the training device corresponding to the reinforcement learning model is an unmanned vehicle, the longitudinal federal service request requests the vehicle distribution information around the unmanned vehicle, and the initial public information is a picture of the unmanned vehicle, then vehicle recognition is performed on the picture and the picture is cropped so that the unmanned vehicle lies at the center of the picture; that is, the picture of the unmanned vehicle is preprocessed.
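For illustration, a minimal sketch of such cropping-based preprocessing is given below, assuming the vehicle recognition step has already produced a bounding box. The function name, the output size and the use of numpy are assumptions for this sketch.

```python
import numpy as np

def center_crop_on_box(image, box, out_h=224, out_w=224):
    """Crop `image` (H x W x C numpy array) so that the centre of `box`
    (x_min, y_min, x_max, y_max) lands at the centre of the output patch.
    The detection step that produces `box` is assumed to exist elsewhere."""
    cx = (box[0] + box[2]) // 2
    cy = (box[1] + box[3]) // 2
    top = max(0, min(cy - out_h // 2, image.shape[0] - out_h))
    left = max(0, min(cx - out_w // 2, image.shape[1] - out_w))
    return image[top:top + out_h, left:left + out_w]

# Example with a dummy 480x640 RGB picture and a hypothetical detected box.
image = np.zeros((480, 640, 3), dtype=np.uint8)
patch = center_crop_on_box(image, box=(300, 200, 380, 260))
print(patch.shape)  # (224, 224, 3)
```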
Step S20, acquiring a feature extraction model, inputting the target public data into the feature extraction model, and acquiring feature vector information;
in this embodiment, it should be noted that the feature vector information includes all feature information of the target public data, for example, if the target public information is vehicle distribution information around an unmanned vehicle, and the feature vector information is a feature vector (a, b), a represents number information of surrounding vehicles, and b represents position information of surrounding vehicles, where a and b may be represented by characters, numbers, and the like, and the feature extraction model is a trained neural network model.
A feature extraction model is acquired, and the target public data is input into the feature extraction model to acquire the feature vector information. Specifically, the feature extraction model is called through a preset model integration module based on the model identification information in the longitudinal federal request; the target public data is then input into the feature extraction model for feature extraction, so that the target public data is converted into a feature vector and the feature vector information is obtained.
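For illustration, a minimal sketch of such a preset model integration module is given below. The registry keys and the two placeholder extractors are assumptions made for this sketch and merely stand in for trained feature extraction models.

```python
import numpy as np

def image_feature_extractor(picture):
    # Placeholder: per-channel mean stands in for a trained CNN (see steps S321 onward).
    return np.asarray(picture, dtype=np.float32).mean(axis=(0, 1))

def text_feature_extractor(text):
    # Placeholder: trivial length-based features stand in for a trained text encoder.
    return np.array([len(text), len(text.split())], dtype=np.float32)

MODEL_INTEGRATION_MODULE = {
    "img-cnn-v1": image_feature_extractor,
    "txt-enc-v1": text_feature_extractor,
}

def call_feature_extraction_model(model_id, target_public_data):
    extractor = MODEL_INTEGRATION_MODULE[model_id]   # selected by model identification information
    return extractor(target_public_data)             # the feature vector information
```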
And step S30, sending the feature vector information to the reinforcement learning equipment so as to carry out the longitudinal federation and update a preset longitudinal federation model.
In this embodiment, the preset longitudinal federal model belongs to the public federal service provider, and the reinforcement learning device is a training device corresponding to the reinforcement learning model.
The feature vector information is sent to the reinforcement learning device so as to carry out the longitudinal federation and update the preset longitudinal federal model. Specifically, the feature vector information is sent to the reinforcement learning device, where it serves as an input of the reinforcement learning device. The reinforcement learning device trains with a reward function calculated based on the current state and the feature vector information, and outputs an executable action to update the current state, where the reward function is used to guide the training of the reinforcement learning device. For example, the reinforcement learning device includes a plurality of executable actions, each of which corresponds to a selection probability, rather than fixedly outputting one executable action. As a further example, assuming the training device corresponding to the reinforcement learning model is an unmanned vehicle, the executable actions include turning, stopping, braking and the like, and the reward function is calculated based on the current state: if the reward function value is positive, the selection probability of the executable action corresponding to the current state is increased, and if the reward function value is negative, the selection probability of the executable action corresponding to the current state is decreased, thereby guiding the training of the reinforcement learning device, that is, of the reinforcement learning model. Further, the reinforcement learning model is trained by the reinforcement learning device to obtain a model training result, and the corresponding gradient information is calculated based on the model training result and the network weight parameters of the reinforcement learning model. The public federal service party receives the gradient information fed back by the reinforcement learning device and updates the preset longitudinal federal model based on the gradient information. As shown in fig. 2, which is a schematic diagram of the public information service architecture based on the longitudinal federation in this embodiment, data source 1, data source 2 and data source 3 are all initial public information, the data integration module includes the preset data scheduling module and the preset data preprocessing module, and the public information federation party belongs to the public federal service party and is used to carry out the longitudinal federation.
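For illustration, the following toy sketch shows the reinforcement-learning-device side of this interaction under the assumption of a linear softmax policy trained with a REINFORCE-style rule: the feature vector information acts as model input, each executable action has a selection probability, a positive reward raises the probability of the selected action and a negative reward lowers it, and the resulting gradient is what would be fed back to the public federal service party. The class, its method names, the state layout and the learning rule are assumptions for this sketch, not the application's prescribed design.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReinforcementLearningDevice:
    """Toy reinforcement-learning participant with a linear softmax policy over
    the executable actions mentioned above (illustrative assumptions only)."""

    ACTIONS = ("turn", "stop", "brake")

    def __init__(self, state_dim, learning_rate=0.01):
        self.W = np.zeros((len(self.ACTIONS), state_dim))   # policy network weights
        self.learning_rate = learning_rate

    def action_probabilities(self, state):
        logits = self.W @ state
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                               # selection probability per action

    def train_on(self, feature_vector, current_state, reward_fn):
        # The feature vector information received from the public federal service party
        # is the model input; here it is concatenated with the device's own state.
        state = np.concatenate([current_state, feature_vector])
        probs = self.action_probabilities(state)
        action = rng.choice(len(self.ACTIONS), p=probs)      # sample an executable action
        reward = reward_fn(state, action)                    # reward guides the training
        # A positive reward raises the selected action's probability, a negative one lowers it.
        one_hot = np.eye(len(self.ACTIONS))[action]
        gradient = reward * np.outer(one_hot - probs, state)
        self.W += self.learning_rate * gradient              # local policy update
        return gradient                                      # gradient information fed back
```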
The step of sending the feature vector information to the reinforcement learning device to perform the longitudinal federation, and updating a preset longitudinal federation model includes:
step S31, sending the feature vector information to the reinforcement learning device to receive gradient information fed back by the reinforcement learning device;
in this embodiment, the feature vector information is sent to the reinforcement learning device to receive gradient information fed back by the reinforcement learning device, and specifically, the feature vector information is sent to the reinforcement learning device, where the feature vector information is to be used as an input of the reinforcement learning device, the reinforcement learning device trains a reward function calculated based on a current state and the feature vector information, outputs an executable action to update the current state, obtains a next state, and calculates gradient information corresponding to the current update, where the gradient information is a gradient vector, and the gradient information is associated with a network weight parameter of a reinforcement learning model, and then receives the gradient information fed back by the reinforcement learning device.
Wherein, the step of sending the feature vector information to the reinforcement learning device to receive the gradient information fed back by the reinforcement learning device comprises the following steps:
step A10, judging whether an updating command is received, and if the updating command is received, training and updating the preset longitudinal federal model based on the gradient information;
in this embodiment, it is determined whether an update command is received, if the update command is received, the preset longitudinal federated model is trained and updated based on the gradient information, specifically, it is determined whether the update command is received, if the update command is received, the network weight parameter of the preset longitudinal federated model is adjusted based on the gradient information, so as to update the current state of the preset reinforcement learning device to the next state of the reinforcement learning device, that is, the preset longitudinal federated model is trained and updated based on the gradient information, where it is to be noted that the number of the reinforcement learning devices is one or more, and in the same longitudinal federated learning, the common service federated party receives one or more gradient information sent by one or more reinforcement learning devices, and when the gradient information is multiple, selecting one gradient information based on a user command or randomly selecting one gradient information to update the preset longitudinal federal model.
And step A20, if the updating command is not received, abandoning the update of the preset longitudinal federal model.
In this embodiment, if the update command is not received, the update of the preset longitudinal federated model is abandoned. Specifically, if the update command is not received, all the gradient information received in the current round of longitudinal federated learning is discarded, and the update of the preset longitudinal federated model is abandoned.
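For illustration, steps A10 and A20 together can be sketched as follows, assuming the model weights are held as a numpy array. The function name, the random selection among multiple pieces of gradient information and the plain gradient step are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def update_longitudinal_federal_model(weights, gradients, update_command,
                                      chosen_index=None, learning_rate=0.01):
    """`gradients` holds the gradient information received from one or more
    reinforcement learning devices in the same round (illustrative sketch)."""
    if not update_command:
        return weights                              # step A20: abandon the update
    if chosen_index is None:                        # no user command: pick one gradient at random
        chosen_index = int(rng.integers(len(gradients)))
    gradient = gradients[chosen_index]
    # Adjust the network weight parameters of the preset longitudinal federal model
    # (sign chosen to match the reward-maximising sketch above).
    return weights + learning_rate * gradient
```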
And step S32, training and updating the preset longitudinal federal model based on the gradient information.
In this embodiment, based on the gradient information, the network weight parameters of the preset longitudinal federated model are adjusted so as to train and update the preset longitudinal federated model, and the current state of the reinforcement learning device is updated to its next state. The next state of the reinforcement learning device is the result obtained when the reinforcement learning device executes the executable action in the current environment, and the current environment is determined by the model input data, that is, by the feature vector information.
According to this embodiment, the longitudinal federal service request sent by the reinforcement learning device is received, the target public data corresponding to the longitudinal federal service request is extracted, the feature extraction model is then obtained, the target public data is input into the feature extraction model to obtain the feature vector information, and the feature vector information is then sent to the reinforcement learning device so as to carry out the longitudinal federation and update the preset longitudinal federal model. That is, in this embodiment, based on the longitudinal federal service request sent by the reinforcement learning device, the feature vector information corresponding to the public data is fed back to the reinforcement learning device; the reinforcement learning device then performs longitudinal federated learning based on the feature vector information, and the preset longitudinal federal model is updated. Because the public data is massive training data, the number and feature richness of the training samples can be greatly expanded, achieving the purpose of training the machine learning model on massive training data, improving the robustness and generality of the machine learning model, and avoiding the performance degradation of the longitudinal federated machine learning model caused by too few training samples. Further, training the machine learning model with massive, feature-rich training data reduces the number of invalid calculations in the machine learning process and improves model training efficiency during machine learning; that is, it improves the calculation efficiency of machine learning, shortens the training time of the machine learning model, and thereby improves the construction efficiency of the machine learning model, avoiding the low construction efficiency caused by training samples of low feature richness. The technical problem of low machine learning model construction efficiency in the prior art is thus solved.
Further, referring to fig. 3, in another embodiment of the longitudinal federal method based on public data, based on the first embodiment in the present application, the federal service request includes model identification information,
the step of obtaining a feature extraction model, inputting the target public data into the feature extraction model, and obtaining feature vector information comprises:
step S21, based on the model identification information, calling a feature extraction model corresponding to the model identification information through a preset model integration module;
in this embodiment, it should be noted that the model identification information includes a character string and a code. The method comprises the steps that information such as pictures and the like, and target public data acquired from different public information data sources need to be subjected to feature extraction by using different feature extraction models.
Step S22, inputting the target public data into the feature extraction model to perform feature extraction on the target public information, so as to obtain the feature vector information.
In this embodiment, it should be noted that the feature vector information is a vector without special meaning to a human user: the user does not know what the feature vector represents, and it can only be interpreted by a machine.
The target public data is input into the feature extraction model to perform feature extraction on the target public information and obtain the feature vector information. Specifically, the target public data is input into the feature extraction model, and data processing is performed on the target public data by the data processing layers in the feature extraction model, where the data processing layers include convolution layers, pooling layers, fully connected layers and the like, and correspondingly the data processing includes convolution processing, pooling processing, fully connected processing and the like; a feature vector is then output, that is, the feature vector information is obtained.
Wherein the feature extraction model comprises a feature extraction neural network, the target public data comprises target picture data,
the step of inputting the target public data into the feature extraction model to perform feature extraction on the target public information to obtain the feature vector information includes:
step S321, inputting the target picture data into the feature extraction neural network to perform convolution processing on the target public data to obtain a convolution processing result;
in this embodiment, it should be noted that the convolution processing refers to a process of performing element-by-element multiplication and summation on an image matrix corresponding to an image and a convolution kernel to obtain an image characteristic value, where the convolution kernel refers to a weight matrix corresponding to an interface image characteristic.
Inputting the target picture data into the feature extraction neural network to perform convolution processing on the target public data to obtain a convolution processing result, specifically, inputting the target picture data into the feature extraction neural network to perform dot multiplication on an image matrix corresponding to the target picture data and the convolution kernel to obtain a new image matrix, that is, to obtain the convolution processing result.
Step S322, performing pooling processing on the convolution processing result to obtain a pooling processing result;
In this embodiment, the pooling processing refers to the process of aggregating the image feature values obtained by convolution to obtain new feature values, and the pooling processing includes maximum-value pooling, average-value pooling, stochastic pooling, sum-region pooling, and the like.
The convolution processing result is pooled to obtain the pooling processing result. Specifically, the convolution processing result is divided into several image matrices of a preset size; with maximum-value pooling, each image matrix is replaced by its maximum pixel value to obtain a new image matrix, that is, the pooling processing result.
Step S323, repeatedly and alternately performing the convolution processing and the pooling processing on the pooling processing result for a preset number of times to obtain the feature vector information;
In this embodiment, the convolution processing and the pooling processing are alternately repeated on the pooling processing result for a preset number of times to obtain the feature vector information. Specifically, steps S321 to S322 are repeated a preset number of times to obtain the feature vector information, where the preset number of times is associated with the number of data processing layers of the feature extraction model.
In this embodiment, based on the model identification information, the feature extraction model corresponding to the model identification information is called through the preset model integration module, and the target public data is then input into the feature extraction model to perform feature extraction on the target public data and obtain the feature vector information. That is, in this embodiment, based on the model identification information sent by the reinforcement learning device, the corresponding feature extraction model is called to perform accurate down-sampling of the target public data: the target public information is converted into a feature vector without unique meaning, which reduces its data size, and the feature vector can then be sent to the reinforcement learning device for federated learning. This reduces the data processing load of the reinforcement learning device and improves calculation efficiency. Moreover, the feature extraction model and the target public data are both controlled by the public federal service party, so the target public data is protected from leakage; and since only the feature vector is transmitted, encrypted transmission is not needed and the target public data is still not leaked, which removes the calculation involved in encrypting the target public data and improves the data transmission efficiency of the target public data. In other words, this embodiment improves the data transmission efficiency and the data calculation efficiency of the training sample data while protecting the privacy of the training sample data, further improves the model training efficiency in federated learning, and lays a foundation for solving the technical problem of low machine learning model construction efficiency in the prior art.
Referring to fig. 4, fig. 4 is a schematic device structure diagram of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 4, the common data based vertical federal device can include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the longitudinal federal device based on public data may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The user interface may include a display (Display) and an input sub-module such as a keyboard (Keyboard), and the optional user interface may also include a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
Those skilled in the art will appreciate that the common data based longitudinal federal device architecture depicted in fig. 4 does not constitute a limitation of common data based longitudinal federal devices and may include more or fewer components than those illustrated, or some components in combination, or a different arrangement of components.
As shown in fig. 4, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and a vertical federal program based on common data. The operating system is a program that manages and controls the hardware and software resources of the longitudinal federated devices based on the common data, and supports the operation of the longitudinal federated programs based on the common data as well as other software and/or programs. The network communication module is used for realizing communication among components in the memory 1005 and communication with other hardware and software in a longitudinal federal system based on common data.
In the longitudinal federal device based on public data shown in fig. 4, the processor 1001 is configured to execute a longitudinal federal program based on public data stored in the memory 1005, and implement the steps of any one of the longitudinal federal methods based on public data described above.
The specific implementation manner of the longitudinal federal device based on public data in the present application is basically the same as that of each embodiment of the longitudinal federal method based on public data, and is not described herein again.
The embodiment of the present application further provides a longitudinal federal device based on public data, where the longitudinal federal device based on public data includes:
the extraction module is used for receiving a longitudinal federal service request sent by the reinforcement learning equipment and extracting target public data corresponding to the longitudinal federal service request;
the characteristic extraction module is used for acquiring a characteristic extraction model, inputting the target public data into the characteristic extraction model and acquiring characteristic vector information;
and the sending module is used for sending the feature vector information to the reinforcement learning equipment so as to carry out the longitudinal federation and update a preset longitudinal federation model.
Optionally, the feature extraction module includes:
the model selection unit is used for calling a feature extraction model corresponding to the model identification information through a preset model integration module based on the model identification information;
and the feature extraction unit is used for inputting the target public data into the feature extraction model so as to perform feature extraction on the target public information and obtain the feature vector information.
Optionally, the feature extraction unit includes:
a convolution subunit, configured to input the target picture data into the feature extraction neural network, so as to perform convolution processing on the target public data, and obtain a convolution processing result;
the pooling subunit is used for pooling the convolution processing result to obtain a pooled processing result;
and the repetition subunit is used for repeatedly and alternately performing the convolution processing and the pooling processing on the pooling processing result for a preset number of times to obtain the feature vector information.
Optionally, the extraction module comprises:
the calling unit is used for receiving a longitudinal federal service request sent by the reinforcement learning equipment and calling initial public information of the longitudinal federal request through a preset data scheduling module;
and the preprocessing unit is used for preprocessing the initial public information through a preset data preprocessing module to obtain the target public information.
Optionally, the sending module includes:
the sending unit is used for sending the feature vector information to the reinforcement learning equipment so as to receive gradient information fed back by the reinforcement learning equipment;
and the training updating unit is used for training and updating the preset longitudinal federal model based on the gradient information.
Optionally, the sending module further includes:
the first judging unit is used for judging whether an updating command is received or not, and if the updating command is received, training and updating the preset longitudinal federated model based on the gradient information;
and the second judging unit is used for abandoning the updating of the preset longitudinal federal model if the updating command is not received.
The specific implementation of the longitudinal federal device based on public data in the present application is basically the same as the embodiments of the longitudinal federal method based on public data, and is not described herein again.
The present application provides a medium, which is a readable storage medium and stores one or more programs, and the one or more programs are further executable by one or more processors for implementing the steps of any one of the above-mentioned public data-based longitudinal federation methods.
The specific implementation of the medium of the present application is substantially the same as that of each embodiment of the above-described longitudinal federation method based on public data, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (9)

1. A longitudinal federation method based on public data, characterized in that the longitudinal federation method based on public data comprises:
receiving a longitudinal federal service request sent by reinforcement learning equipment, and extracting target public data corresponding to the longitudinal federal service request from a data integration module, wherein the data integration module comprises a plurality of public information data sources of different types;
acquiring a feature extraction model, and inputting the target public data into the feature extraction model to acquire feature vector information;
sending the feature vector information to the reinforcement learning equipment to carry out the longitudinal federation and update a preset longitudinal federation model;
wherein the longitudinal federated service request includes model identification information,
the step of obtaining a feature extraction model, inputting the target public data into the feature extraction model, and obtaining feature vector information comprises:
calling a feature extraction model corresponding to the model identification information through a preset model integration module based on the model identification information;
and inputting the target public data into the feature extraction model to perform feature extraction on the target public data to obtain the feature vector information.
2. A longitudinal federated method based on public data as recited in claim 1, wherein the feature extraction model includes a feature extraction neural network, the target public data includes target picture data,
the step of inputting the target public data into the feature extraction model to perform feature extraction on the target public data to obtain the feature vector information includes:
inputting the target picture data into the feature extraction neural network to carry out convolution processing on the target public data to obtain a convolution processing result;
performing pooling processing on the convolution processing result to obtain a pooling processing result;
and repeatedly and alternately performing the convolution processing and the pooling processing on the pooling processing result for a preset number of times to obtain the feature vector information.
3. The public data-based longitudinal federation method of claim 1, wherein the step of receiving a longitudinal federation service request sent by a reinforcement learning device and extracting target public data corresponding to the longitudinal federation service request comprises:
receiving a longitudinal federal service request sent by reinforcement learning equipment, and calling initial public information of the longitudinal federal service request through a preset data scheduling module;
and preprocessing the initial public information through a preset data preprocessing module to obtain the target public data.
4. A longitudinal federation method based on public data as claimed in claim 1 wherein the step of sending the feature vector information to the reinforcement learning device for longitudinal federation, updating a preset longitudinal federation model comprises:
sending the feature vector information to the reinforcement learning equipment to receive gradient information fed back by the reinforcement learning equipment;
and training and updating the preset longitudinal federal model based on the gradient information.
5. A public-data-based longitudinal federation method as recited in claim 4, wherein the feature vector information is used as model input data for the reinforcement learning device, the model input data being used to input the reinforcement learning device to calculate the gradient information for the reinforcement learning device by training the reinforcement learning device.
6. A longitudinal federated method based on public data as recited in claim 4, wherein the step of sending the feature vector information to the reinforcement learning device to receive gradient information fed back by the reinforcement learning device is followed by:
judging whether an updating command is received or not, and if the updating command is received, training and updating the preset longitudinal federated model based on the gradient information;
and if the updating command is not received, abandoning the update of the preset longitudinal federal model.
7. A longitudinal public data-based federation apparatus, comprising:
the extraction module is used for receiving a longitudinal federal service request sent by reinforcement learning equipment and extracting target public data corresponding to the longitudinal federal service request from the data integration module, wherein the data integration module comprises a plurality of public information data sources of different types;
the characteristic extraction module is used for acquiring a characteristic extraction model, inputting the target public data into the characteristic extraction model and acquiring characteristic vector information;
the sending module is used for sending the feature vector information to the reinforcement learning equipment so as to carry out the longitudinal federation and update a preset longitudinal federation model;
wherein the federated service request includes model identification information,
the feature extraction module is further to:
calling a feature extraction model corresponding to the model identification information through a preset model integration module based on the model identification information;
and inputting the target public data into the feature extraction model to perform feature extraction on the target public data to obtain the feature vector information.
8. Public data-based longitudinal federal device, comprising: a memory, a processor, and a program stored on the memory for implementing the common data based vertical federation method,
the memory is used for storing a program for realizing a longitudinal federal method based on public data;
the processor is configured to execute a program implementing the public-data based vertical federal method for implementing the steps of the public-data based vertical federal method as claimed in any of claims 1 to 6.
9. A medium having stored thereon a program for implementing a public-data based longitudinal federated method, the program being executed by a processor to implement the steps of the public-data based longitudinal federated method as recited in any one of claims 1 to 6.
CN201911333204.6A 2019-12-20 2019-12-20 Longitudinal federation method, device, equipment and medium based on public data Active CN111062493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911333204.6A CN111062493B (en) 2019-12-20 2019-12-20 Longitudinal federation method, device, equipment and medium based on public data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911333204.6A CN111062493B (en) 2019-12-20 2019-12-20 Longitudinal federation method, device, equipment and medium based on public data

Publications (2)

Publication Number Publication Date
CN111062493A CN111062493A (en) 2020-04-24
CN111062493B (en) 2021-06-15

Family

ID=70301446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911333204.6A Active CN111062493B (en) 2019-12-20 2019-12-20 Longitudinal federation method, device, equipment and medium based on public data

Country Status (1)

Country Link
CN (1) CN111062493B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553470B (en) * 2020-07-10 2020-10-27 成都数联铭品科技有限公司 Information interaction system and method suitable for federal learning
CN113282933B (en) * 2020-07-17 2022-03-01 中兴通讯股份有限公司 Federal learning method, device and system, electronic equipment and storage medium
CN112001500B (en) * 2020-08-13 2021-08-03 星环信息科技(上海)股份有限公司 Model training method, device and storage medium based on longitudinal federated learning system
CN112651446B (en) * 2020-12-29 2023-04-14 杭州趣链科技有限公司 Unmanned automobile training method based on alliance chain
CN113723621B (en) * 2021-04-19 2024-02-06 京东科技控股股份有限公司 Longitudinal federal learning modeling method, device, equipment and computer medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190012592A1 (en) * 2017-07-07 2019-01-10 Pointr Data Inc. Secure federated neural networks
CN109242099A (en) * 2018-08-07 2019-01-18 中国科学院深圳先进技术研究院 Training method, device, training equipment and the storage medium of intensified learning network
CN109255013A (en) * 2018-08-14 2019-01-22 平安医疗健康管理股份有限公司 Claims Resolution decision-making technique, device, computer equipment and storage medium
CN109711529A (en) * 2018-11-13 2019-05-03 中山大学 A kind of cross-cutting federal learning model and method based on value iterative network
CN110263908A (en) * 2019-06-20 2019-09-20 深圳前海微众银行股份有限公司 Federal learning model training method, equipment, system and storage medium
CN110428058A (en) * 2019-08-08 2019-11-08 深圳前海微众银行股份有限公司 Federal learning model training method, device, terminal device and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106663184A (en) * 2014-03-28 2017-05-10 华为技术有限公司 Method and system for verifying facial data
CN106650806B (en) * 2016-12-16 2019-07-26 北京大学深圳研究生院 A kind of cooperating type depth net model methodology for pedestrian detection
CN108512841B (en) * 2018-03-23 2021-03-16 四川长虹电器股份有限公司 Intelligent defense system and method based on machine learning
CN109614238B (en) * 2018-12-11 2023-05-12 深圳市网心科技有限公司 Target object identification method, device and system and readable storage medium
CN110363305B (en) * 2019-07-17 2023-09-26 深圳前海微众银行股份有限公司 Federal learning method, system, terminal device and storage medium
CN110598739B (en) * 2019-08-07 2023-06-23 广州视源电子科技股份有限公司 Image-text conversion method, image-text conversion equipment, intelligent interaction method, intelligent interaction system, intelligent interaction equipment, intelligent interaction client, intelligent interaction server, intelligent interaction machine and intelligent interaction medium
CN110569920B (en) * 2019-09-17 2022-05-10 国家电网有限公司 Prediction method for multi-task machine learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xinle Liang et al., "Federated Transfer Reinforcement Learning for Autonomous Driving", arXiv:1910.06001v1, 2019-10-14, pp. 1-17. *

Also Published As

Publication number Publication date
CN111062493A (en) 2020-04-24

Similar Documents

Publication Publication Date Title
CN111062493B (en) Longitudinal federation method, device, equipment and medium based on public data
US11816884B2 (en) Attention-based image generation neural networks
CN107578017B (en) Method and apparatus for generating image
KR102253627B1 (en) Multiscale image generation
CN113627085B (en) Transverse federal learning modeling optimization method, equipment and medium
CN107609506B (en) Method and apparatus for generating image
CN108230346B (en) Method and device for segmenting semantic features of image and electronic equipment
US11144782B2 (en) Generating video frames using neural networks
CN110413812B (en) Neural network model training method and device, electronic equipment and storage medium
US11514263B2 (en) Method and apparatus for processing image
CN110059623B (en) Method and apparatus for generating information
US20220129740A1 (en) Convolutional neural networks with soft kernel selection
KR20200102409A (en) Key frame scheduling method and apparatus, electronic devices, programs and media
CN112434620B (en) Scene text recognition method, device, equipment and computer readable medium
CN114612688B (en) Countermeasure sample generation method, model training method, processing method and electronic equipment
CN108241855B (en) Image generation method and device
CN110795235A (en) Method and system for deep learning and cooperation of mobile web
CN108229650B (en) Convolution processing method and device and electronic equipment
CN110046571B (en) Method and device for identifying age
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
US10984542B2 (en) Method and device for determining geometric transformation relation for images
CN116597430A (en) Article identification method, apparatus, electronic device, and computer-readable medium
CN116129534A (en) Image living body detection method and device, storage medium and electronic equipment
CN114862720A (en) Canvas restoration method and device, electronic equipment and computer readable medium
CN111950015A (en) Data open output method and device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant