CN113344131A - Network training method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113344131A (application number CN202110737722.5A)
- Authority
- CN
- China
- Prior art keywords
- network
- server
- coding
- prediction
- parameter
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The present disclosure relates to a network training method and apparatus, an electronic device, and a storage medium, where the method is applied to a client, and the method includes: receiving a server coding network parameter and a server prediction network parameter sent by a server; training a first coding network, a second coding network and a first prediction network deployed in a client based on a server coding network parameter, a server prediction network parameter and a local image data set to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network; sending the first coding network parameter and the first prediction network parameter to the server so as to update the server coding network parameter and the server prediction network parameter in the server; and iteratively executing the steps until the iterative training meets the preset training condition, wherein the first coding network and/or the server coding network after the iterative training is used for carrying out image processing on the image to be processed.
Description
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a network training method and apparatus, an electronic device, and a storage medium.
Background
In the field of machine learning, feature learning has received a great deal of attention. Unlike predictive learning, the purpose of feature learning is not to predict an observation from raw data, but to learn the underlying structure of the raw data and thereby analyze other characteristics of the raw data. For example, the input data may be pictures, videos, speech, or text, which are high-dimensional, redundant and complex, so that conventional manual feature extraction has become impractical; in recent years, learning features with deep learning has therefore received wide attention. The learned features can be further used in other downstream tasks; for example, features learned by an image classification task can be used in tasks such as object detection and scene segmentation.
Data for conventional unsupervised feature learning is typically downloaded from the web, e.g., datasets such as ImageNet. However, with the explosive growth of edge devices (e.g., cameras, mobile phones, etc.) in recent years, a large amount of image data is available for learning on the edge devices, and a coding network trained on such data is better suited to feature learning in the corresponding scenes. For privacy protection reasons, however, these data cannot all be aggregated onto one server. Therefore, a coding network training method capable of avoiding private data disclosure is needed.
Disclosure of Invention
The disclosure provides a network training method and device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a network training method, which is applied to a client in which a first coding network, a second coding network, a first prediction network and a local image data set are deployed, the method including: receiving a server coding network parameter and a server prediction network parameter sent by a server, wherein a server coding network and a server prediction network are deployed in the server, and the server coding network parameter and the server prediction network parameter are obtained by training the server on the server coding network and the server prediction network by combining at least two clients; training the first coding network, the second coding network and the first prediction network based on the server-side coding network parameters, the server-side prediction network parameters and the local image data set to obtain first coding network parameters corresponding to the first coding network and first prediction network parameters corresponding to the first prediction network; sending the first coding network parameter and the first prediction network parameter to the server, wherein the first coding network parameter and the first prediction network parameter are used for updating the server coding network parameter and the server prediction network parameter in the server; and iteratively executing the steps until iterative training meets a preset training condition, wherein the first coding network and/or the server coding network after iterative training is used for carrying out image processing on the image to be processed.
In a possible implementation manner, the training the first coding network, the second coding network, and the first prediction network based on the server-side coding network parameter, the server-side prediction network parameter, and the local image dataset to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network includes: updating the current coding network parameter corresponding to the first coding network into the server coding network parameter to obtain the updated first coding network; determining the similarity between the server-side coding network parameter and the current coding network parameter before the first coding network is updated; updating the current predicted network parameters corresponding to the first predicted network based on the similarity and the server predicted network parameters to obtain the updated first predicted network; training the updated first coding network and the updated first prediction network based on the local image dataset to obtain the first coding network parameter and the first prediction network parameter.
In a possible implementation manner, the determining a similarity between the server-side encoded network parameter and the current encoded network parameter before the first encoded network update includes: determining the Euclidean distance between the server side coding network parameter and the current coding network parameter before updating; and determining the Euclidean distance as the similarity.
In a possible implementation manner, the updating, based on the similarity and the predicted network parameter of the server, a current predicted network parameter corresponding to the first predicted network to obtain an updated first predicted network includes: under the condition that the similarity is smaller than a similarity threshold value, updating the current prediction network parameter to the server side prediction network parameter to obtain an updated first prediction network; or, in the case that the similarity is greater than or equal to the similarity threshold, keeping the current predicted network parameter unchanged.
In one possible implementation, the local image dataset includes a plurality of target images; the training the updated first coding network and the updated first prediction network based on the local image dataset to obtain the first coding network parameter and the first prediction network parameter includes: respectively carrying out two different image transformation processes on each target image to obtain a first transformation image and a second transformation image corresponding to each target image; performing image processing on the first transformed image corresponding to each target image by using the updated first coding network and the updated first prediction network to obtain a first output vector corresponding to each target image; performing image processing on the second transformed image corresponding to each target image by using the second coding network to obtain a second output vector corresponding to each target image; determining a contrast training loss corresponding to each target image according to the first output vector and the second output vector corresponding to each target image; and adjusting the updated first coding network and the updated network parameters of the first prediction network according to the comparison training loss corresponding to each target image to obtain the first coding network parameters and the first prediction network parameters.
In one possible implementation, the method further includes: and updating the current coding network parameter corresponding to the second coding network by using the exponential moving average value of the first coding network parameter based on the preset weight.
According to an aspect of the present disclosure, a network training method is provided, where the method is applied to a server, where a server coding network and a server prediction network are deployed, and the method includes: receiving a first coding network parameter and a first prediction network parameter sent by at least two clients, wherein each client is respectively deployed with a first coding network, a second coding network, a first prediction network and a local image data set, and the first coding network parameter and the first prediction network parameter are obtained by training the first coding network, the second coding network and the first prediction network based on the local image data set; updating the server side coding network according to the first coding network parameter to obtain a server side coding network parameter, and updating the server side prediction network according to the first prediction network parameter to obtain a server side prediction network parameter; sending the server-side encoded network parameters and the server-side predicted network parameters to each of the clients, wherein the server-side encoded network parameters and the server-side predicted network parameters are used for updating the first encoded network parameters and the first predicted network parameters in each of the clients; and iteratively executing the steps until iterative training meets a preset training condition, wherein the server coding network and/or the first coding network after iterative training is used for carrying out image processing on the image to be processed.
According to an aspect of the present disclosure, there is provided a network training apparatus, which is applied to a client in which a first encoding network, a second encoding network, a first prediction network and a local image data set are deployed, the apparatus including: the server side coding network parameter and the server side prediction network parameter are obtained by the server side combining at least two clients to train the server side coding network and the server side prediction network; a training module, configured to train the first coding network, the second coding network, and the first prediction network based on the server-side coding network parameter, the server-side prediction network parameter, and the local image dataset, to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network; a sending module, configured to send the first encoded network parameter and the first predicted network parameter to the server, where the first encoded network parameter and the first predicted network parameter are used to update the server encoded network parameter and the server predicted network parameter in the server; and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the first coding network and/or the server coding network after the iterative training is used for carrying out image processing on the image to be processed.
According to an aspect of the present disclosure, a network training apparatus is provided, where the apparatus is applied to a server, where a server coding network and a server prediction network are deployed, and the apparatus includes: the system comprises a receiving module, a prediction module and a prediction module, wherein the receiving module is used for receiving a first coding network parameter and a first prediction network parameter sent by at least two clients, a first coding network, a second coding network, a first prediction network and a local image data set are respectively deployed in each client, and the first coding network parameter and the first prediction network parameter are obtained by training the first coding network, the second coding network and the first prediction network based on the local image data set; the updating module is used for updating the server side coding network according to the first coding network parameter to obtain a server side coding network parameter, and updating the server side prediction network according to the first prediction network parameter to obtain a server side prediction network parameter; a sending module, configured to send the server-side encoded network parameter and the server-side predicted network parameter to each client, where the server-side encoded network parameter and the server-side predicted network parameter are used to update the first encoded network parameter and the first predicted network parameter in each client; and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the server-side coding network and/or the first coding network after the iterative training is used for carrying out image processing on the image to be processed.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, in a client deployed with a first coding network, a second coding network, a first prediction network and a local image dataset, a server coding network parameter and a server prediction network parameter sent by a server are received, wherein the server is deployed with the server coding network and the server prediction network, and the server coding network parameter and the server prediction network parameter are obtained by training the server coding network and the server prediction network by combining at least two clients; training a first coding network, a second coding network and a first prediction network based on the server coding network parameters, the server prediction network parameters and the local image data set to obtain first coding network parameters corresponding to the first coding network and first prediction network parameters corresponding to the first prediction network; sending a first coding network parameter and a first prediction network parameter to a server, wherein the first coding network parameter and the first prediction network parameter are used for updating the server coding network parameter and the server prediction network parameter in the server; and iteratively executing the steps until the iterative training meets the preset training condition, wherein the first coding network and/or the server coding network after the iterative training is used for carrying out image processing on the image to be processed.
The client, in combination with the server, performs network training over multiple training rounds. In the joint training process, the client performs comparative learning through a twin network comprising an online network (the first coding network and the first prediction network) and a target network (the second coding network), effectively achieving unsupervised training at the client. During training, the client uploads to the server only the network parameters of the latest-trained online network (the network parameters corresponding to the first coding network and those corresponding to the first prediction network) for aggregation. Therefore, on the premise of protecting data privacy, personalized training of the client in the joint training process is effectively achieved and the training effect of the joint training is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a network training method in accordance with an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a server-client joint training system, according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 4 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure;
fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow diagram of a network training method according to an embodiment of the present disclosure. The network training method can be executed by a client, wherein a first coding network, a second coding network, a first prediction network and a local image data set are deployed in the client. In some possible implementations, the network training method may be implemented by a client invoking computer readable instructions stored in a memory. As shown in fig. 1, the network training method may include:
in step S11, a server-side encoded network parameter and a server-side predicted network parameter sent by a server are received, where a server-side encoded network and a server-side predicted network are deployed in the server, and the server-side encoded network parameter and the server-side predicted network parameter are obtained by training the server-side encoded network and the server-side predicted network by the server in combination with at least two clients.
The client is connected with the server, and the server can perform network training in combination with the client based on a federated learning algorithm. In the embodiment of the present disclosure, at least two clients are connected to the server, and the specific number of the at least two clients may be determined according to actual situations, and may be two or more.
The server encoding network and the first encoding network have the same network structure. Because the training data needed in the training process may relate to privacy data such as human faces, human bodies, personal identities and the like, in order to protect data privacy, by using the method of the embodiment of the disclosure, the client trains the first coding network based on the local image data set, and the server fuses network parameters after the training of the first coding networks of the plurality of clients connected with the server, so as to realize the training of the server. For example, the server-side encoding network and the first encoding network may be a pedestrian re-identification network, a face recognition network, an identity recognition network, a feature learning network, and the like, which is not specifically limited in this disclosure.
A server-side coding network and a server-side prediction network are deployed in the server. The server, in combination with the at least two clients connected to it, trains the server-side coding network and the server-side prediction network deployed therein to obtain server-side coding network parameters and server-side prediction network parameters, and then issues these parameters to each client, so that each client receives the server-side coding network parameters and the server-side prediction network parameters.
In step S12, based on the server-side encoded network parameter, the server-side predicted network parameter, and the local image dataset, the first encoding network, the second encoding network, and the first prediction network are trained to obtain a first encoded network parameter corresponding to the first encoding network and a first prediction network parameter corresponding to the first prediction network.
The client is connected with the image acquisition device, and can acquire image data from the image acquisition device to construct a local image data set. The local image dataset deployed in the client is label-free data, and in order to perform unsupervised network training by using the label-free local image dataset in the client, a twin network capable of performing comparative training is deployed in the client.
FIG. 2 shows a schematic diagram of a server-client joint training system according to an embodiment of the present disclosure. As shown in fig. 2, an asymmetric twin network is deployed in the client, and the asymmetric twin network includes an online network and a target network. Wherein the online network comprises a first coding network and a first prediction network, and the target network comprises a second coding network. Based on the comparison training of the online network and the target network in the twin network, the unsupervised network training can be realized by using the unlabeled local image data set in the client.
The second coding network is deployed in the target network, so that a positive regression target output is generated in the training process for comparison with the first coding network deployed in the online network. The first encoding network and the second encoding network may share the same network structure but have different network parameters. For example, the network architecture of the first encoding network and the second encoding network may be ResNet50 or the like, which is not specifically limited by this disclosure.
The first prediction network is deployed in the online network to provide the asymmetry between the online network and the target network, which improves the performance of the comparative training of the twin network. The network structure of the first prediction network may be an MLP (multi-layer perceptron) or a similar network, which is not specifically limited by this disclosure.
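For concreteness, the asymmetric twin network described above can be sketched as follows. This is a minimal illustration assuming PyTorch; the ResNet-50 backbone, the 256-dimensional output, and the MLP sizes are assumptions for the example and are not prescribed by the disclosure.

```python
import copy
import torch
import torch.nn as nn
import torchvision

class MLPHead(nn.Module):
    """Simple MLP used as the first prediction network (predictor)."""
    def __init__(self, in_dim=256, hidden_dim=4096, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class ClientTwinNetwork(nn.Module):
    """Asymmetric twin network: online network = first coding network + first
    prediction network; target network = second coding network only."""
    def __init__(self):
        super().__init__()
        # First coding network (online encoder); backbone choice is illustrative.
        self.online_encoder = torchvision.models.resnet50(num_classes=256)
        # First prediction network, present only in the online branch.
        self.predictor = MLPHead(in_dim=256, hidden_dim=4096, out_dim=256)
        # Second coding network: same structure, separate parameters.
        self.target_encoder = copy.deepcopy(self.online_encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False  # updated by moving average, not by gradients

    def forward(self, view_online, view_target):
        y = self.predictor(self.online_encoder(view_online))   # first output vector
        with torch.no_grad():
            y_target = self.target_encoder(view_target)        # second output vector
        return y, y_target
```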
The client side carries out comparison training by utilizing the asymmetric twin network deployed in the client side based on the server side coding network parameter and the server side prediction network parameter sent by the server side, namely, the first coding network, the second coding network and the first prediction network are trained to obtain the network parameter (the first coding network parameter) generated after the first coding network training, the network parameter (the first prediction network parameter) generated after the first prediction network training and the network parameter generated after the second coding network training. Hereinafter, the comparison training process of the first coding network, the second coding network, and the first prediction network will be described in detail with reference to possible implementation manners of the present disclosure, and will not be described herein again.
In step S13, the first encoded network parameter and the first predicted network parameter are sent to the server, where the first encoded network parameter and the first predicted network parameter are used to update the server encoded network parameter and the server predicted network parameter in the server.
The client sends the first coding network parameter and the first prediction network parameter obtained by local training to the server, and the server realizes training by fusing the network parameters sent by the client without sending image data to the server by the client, so that data privacy can be protected. The server side fuses the first coding network parameters uploaded by at least two clients connected with the server side so as to update the coding network parameters of the server side; and the server side fuses the first predicted network parameters uploaded by at least two clients connected with the server side so as to update the predicted network parameters of the server side.
In step S14, the above steps are iteratively performed until the iterative training satisfies a preset training condition, and the first coding network and/or the server coding network after the iterative training is used to perform image processing on the image to be processed.
The preset training condition may be determined according to an actual situation, for example, the preset training condition may be a preset number of training rounds, and may be a network convergence degree, which is not specifically limited by the present disclosure.
And the server side and the client side are combined to carry out iterative training so as to obtain a trained server side coding network in the server side and a trained first coding network in the client side. The trained server-side coding network and the first coding network can be used for processing images to be processed. The trained server network obtained by the server is obtained by combining at least two clients for training, has higher universality and robustness, and is suitable for image processing of images to be processed under various application scenes. The trained first coding network obtained by the client is obtained by training based on the local image data set, has higher individuation, and has higher processing precision when the client performs image processing locally.
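The client-side steps S11 to S13 of one training round can be summarized in a short sketch. The helper functions update_online_network and train_one_epoch are placeholders that are sketched further below, and all names and hyper-parameters here are illustrative rather than part of the disclosure.

```python
def run_client_round(twin_net, server_enc_params, server_pred_params,
                     local_loader, optimizer, augment_a, augment_b,
                     mu, num_local_epochs=1):
    """One communication round from the client's point of view."""
    # Step S11: the server-side parameters have been received; update the online
    # network (the predictor decision follows the divergence-aware rule below).
    update_online_network(twin_net, server_enc_params, server_pred_params, mu)

    # Step S12: local contrastive training on the unlabeled local image dataset.
    for _ in range(num_local_epochs):
        train_one_epoch(twin_net, local_loader, optimizer, augment_a, augment_b)

    # Step S13: only the online-network parameters are uploaded; the second
    # (target) coding network never leaves the client.
    return twin_net.online_encoder.state_dict(), twin_net.predictor.state_dict()
```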
In practical application, the installation areas, installation angles, data acquisition amounts and acquired data types corresponding to the image acquisition devices connected with different clients are different, so that the local image datasets in different clients are not independently and identically distributed (non-IID). For example, the local image dataset in a first client corresponds to data class A, data class B and data class C, while the local image dataset in a second client corresponds to data class D and data class E. Therefore, when the server-side coding network and the first coding network are feature learning networks, the local image dataset in each client does not cover all data classes, so a trained first coding network obtained by performing feature learning training on each client based only on its local image dataset has poor performance and cannot readily be applied to downstream tasks (for example, tasks such as object detection and scene segmentation).
In the embodiment of the disclosure, under the condition that the server-side coding network and the first coding network are feature learning networks, the server side can effectively implement joint training by using non-full-class data dispersed in different clients on the premise of protecting data privacy based on a federal learning algorithm, so that the server-side coding network with higher network performance can be obtained by training in the server side, and can be better applied to a downstream task to perform image processing on an image to be processed.
According to the embodiment of the disclosure, the client, in combination with the server, performs network training over multiple training rounds. In the joint training process, the client performs comparative learning through a twin network comprising an online network (the first coding network and the first prediction network) and a target network (the second coding network), effectively achieving unsupervised training at the client. During training, the client uploads to the server only the network parameters of the latest-trained online network (the network parameters corresponding to the first coding network and those corresponding to the first prediction network) for aggregation. Therefore, on the premise of protecting data privacy, personalized training of the client in the joint training process is effectively achieved and the training effect of the joint training is improved.
Still taking the above fig. 2 as an example, as shown in fig. 2, the system includes a server and two clients (a first client and a second client). A twin network is deployed in each client (e.g., a frontend device with network training functionality). The twin network in each client includes an online network (including a first encoding network and a first prediction network) and a target network (including a second encoding network). And a server coding network and a server prediction network are deployed in the server.
Before training starts, the server initializes the server-side coding network and the server-side prediction network deployed locally, obtaining the server-side coding network parameters f_φ^(0) and the server-side prediction network parameters p_φ^(0) generated in the 0th training round, and sends f_φ^(0) and p_φ^(0) to both clients to initiate iterative training.
In the 1st training round, any client receives the server-side coding network parameters f_φ^(0) and the server-side prediction network parameters p_φ^(0) sent by the server, updates the current coding network parameters corresponding to the first coding network and the second coding network to f_φ^(0), and updates the current prediction network parameters corresponding to the first prediction network to p_φ^(0). The client then performs the local training of the 1st training round on the updated first coding network, second coding network and first prediction network based on the local image dataset.
Only in this initialization process does the client use the server-side parameters generated in the 0th training round (f_φ^(0) and p_φ^(0)) to directly update the current coding network parameters corresponding to the first and second coding networks and the current prediction network parameters corresponding to the first prediction network. In each training round after the start of the subsequent iterative training, the current prediction network parameters corresponding to the first prediction network are dynamically updated based on divergence-aware predictor updating (DAPU) of the training process, and the current network parameters corresponding to the second coding network are updated based on the current coding network parameters corresponding to the first coding network.
In a possible implementation manner, training a first coding network, a second coding network, and a first prediction network based on a server coding network parameter, a server prediction network parameter, and a local image dataset to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network includes: updating the current coding network parameters corresponding to the first coding network into server-side coding network parameters to obtain an updated first coding network; determining the similarity between the server side coding network parameter and the current coding network parameter before the first coding network is updated; updating the current predicted network parameters corresponding to the first predicted network based on the similarity and the server predicted network parameters to obtain an updated first predicted network; and training the updated first coding network and the updated first prediction network based on the local image data set to obtain a first coding network parameter and a first prediction network parameter.
The aim of training by combining the server and the client is to train in the server to obtain a coding network with higher network performance so as to be applied to subsequent downstream tasks. In each training round, based on the server encoding network parameters obtained by the server aggregating a plurality of clients, the current network parameters of the first encoding network in the client online network are directly updated, so that the joint training of the first encoding network in the client and the server encoding network in the server is realized.
Because the predictor in the client is the last layer of the online network and contains more local knowledge of the client, in order to enable the server to learn the knowledge of the client better, how to update the current prediction network parameters of the first prediction network can be dynamically selected according to the similarity between the coding network parameters of the server and the current coding network parameters of the first coding network before updating.
After the (r-1)-th training round, the server sends the server-side coding network parameters f_φ^(r-1) and the server-side prediction network parameters p_φ^(r-1), obtained by aggregating the at least two clients, to each client in order to perform the r-th training round. For any client, after receiving f_φ^(r-1) and p_φ^(r-1), the client directly updates the first coding network with f_φ^(r-1), that is, it updates the current coding network parameters f_θ^(r-1) (generated by the first coding network after the (r-1)-th training round) to the server-side coding network parameters f_φ^(r-1). The client further determines the similarity between the current coding network parameters f_θ^(r-1) before updating and the server-side coding network parameters f_φ^(r-1), and dynamically selects, according to this similarity, whether to update the first prediction network with the server-side prediction network parameters p_φ^(r-1).
In a possible implementation manner, determining a similarity between a server-side encoded network parameter and a current encoded network parameter before updating includes: determining the Euclidean distance between the server coding network parameter and the current coding network parameter before the first coding network is updated; and determining the similarity according to the Euclidean distance.
Still taking the first client as an example, the Euclidean distance ‖f_θ^(r-1) − f_φ^(r-1)‖₂ between the current coding network parameters f_θ^(r-1) before updating and the server-side coding network parameters f_φ^(r-1) is determined, and this Euclidean distance is used to determine the similarity between them. The smaller the Euclidean distance between f_θ^(r-1) and f_φ^(r-1), the higher the similarity between them; the larger the Euclidean distance, the lower the similarity.
In a possible implementation manner, updating a current predicted network parameter corresponding to a first predicted network based on the similarity and a server predicted network parameter to obtain an updated first predicted network includes: under the condition that the similarity is smaller than the similarity threshold value, updating the current prediction network parameters into server side prediction network parameters to obtain an updated first prediction network; or, in the case that the similarity is greater than or equal to the similarity threshold, keeping the current predicted network parameter unchanged.
According to the similarity between the current coding network parameters f_θ^(r-1) before updating and the server-side coding network parameters f_φ^(r-1), whether to use the server-side prediction network parameters p_φ^(r-1) to update the current prediction network parameters p_θ^(r-1) corresponding to the first prediction network (generated by the first prediction network after the (r-1)-th training round) can be dynamically selected by the following formula (1):

p_θ^(r) = p_φ^(r-1) if ‖f_θ^(r-1) − f_φ^(r-1)‖₂ < μ, otherwise p_θ^(r) = p_θ^(r-1) (1)

When the Euclidean distance ‖f_θ^(r-1) − f_φ^(r-1)‖₂ is smaller than the threshold μ, it indicates that the similarity between the current coding network parameters f_θ^(r-1) before updating and the server-side coding network parameters f_φ^(r-1) is high, that is, the divergence between the first coding network in the client and the server-side coding network in the server is small. In this case, in order for the client to better learn the knowledge of other clients, the first prediction network is updated with the server-side prediction network parameters p_φ^(r-1), that is, the current prediction network parameters p_θ^(r-1) corresponding to the first prediction network are updated to p_φ^(r-1). When the Euclidean distance is greater than or equal to the threshold μ, it indicates that the similarity between f_θ^(r-1) and f_φ^(r-1) is low, that is, the divergence between the first coding network in the client and the server-side coding network in the server is large. In this case, in order to better adapt to the local characteristics of the client, the current prediction network parameters p_θ^(r-1) corresponding to the first prediction network remain unchanged.
And performing local training of the r training round in the client based on the updated current encoding network parameters corresponding to the first encoding network and the updated current prediction network parameters corresponding to the first prediction network.
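The divergence-aware update described above can be sketched as follows; this fills in the update_online_network placeholder used earlier. Computing the Euclidean distance over all flattened coding-network parameters, and the float cast of buffers, are assumptions for illustration.

```python
import torch

def update_online_network(twin_net, server_enc_params, server_pred_params, mu):
    """Update the client's online network from the received server-side parameters."""
    # Euclidean distance between the current first-coding-network parameters and
    # the received server-side coding parameters (all tensors flattened).
    local_vec = torch.cat([p.detach().flatten().float()
                           for p in twin_net.online_encoder.state_dict().values()])
    server_vec = torch.cat([p.flatten().float()
                            for p in server_enc_params.values()])
    distance = torch.linalg.norm(local_vec - server_vec)

    # The first coding network is always overwritten with the server-side parameters.
    twin_net.online_encoder.load_state_dict(server_enc_params)

    # Formula (1): adopt the server-side predictor only when the divergence is small;
    # otherwise keep the local first prediction network unchanged.
    if distance < mu:
        twin_net.predictor.load_state_dict(server_pred_params)
```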
The process of performing local training for one training round in the client is described in detail below.
In one possible implementation, the local image dataset includes a plurality of target images; training the updated first coding network and the updated first prediction network based on the local image dataset to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network, including: respectively carrying out two different image transformation processes on each target image to obtain a first transformation image and a second transformation image corresponding to each target image; performing image processing on the first transformation image corresponding to each target image by using the updated first coding network and the updated first prediction network to obtain a first output vector corresponding to each target image; performing image processing on a second conversion image corresponding to each target image by using a second coding network to obtain a second output vector corresponding to each target image; determining a contrast training loss corresponding to each target image according to the first output vector and the second output vector corresponding to each target image; and adjusting the updated first coding network and the updated network parameters of the first prediction network according to the comparison training loss corresponding to each target image to obtain the first coding network parameters and the first prediction network parameters.
Since the local image dataset deployed in the client is unlabeled data, the client performs comparative training with the online network (including the first encoding network and the first prediction network) and the target network (including the second encoding network) in the twin network to achieve unsupervised network training on the unlabeled local image dataset.
Because the unsupervised network training in the client is a self-learning process, if a target image in the local image data set is directly and respectively input into the online network and the target network of the twin network, the twin network may not realize comparative learning, that is, the output of the online network and the target network is 0, and the training fails. Therefore, in order to enable the twin network to effectively implement contrast training, two different image transformation processes (for example, cropping, horizontal flipping, color transformation, etc.) are respectively performed on each target image in the local image data set, so as to obtain a first transformation image and a second transformation image corresponding to each target image. Then, the first transformation image and the second transformation image corresponding to each target image are respectively input into the online network and the target network, so that the input of the two networks is different, and the subsequent comparison training is effectively carried out.
In the local training process, network parameters of the first coding network and the first prediction network are continuously adjusted based on the comparison training loss of the online network and the target network until the first coding network parameters corresponding to the first coding network and the first prediction network parameters corresponding to the first prediction network, which are finally generated in the current training round, are obtained. In addition, in the local training process, after the network parameters of the first coding network are adjusted each time, the network parameters of the second coding network are updated according to the network parameters of the first coding network.
The comparative training process of the twin network is described in detail below. Still taking the above fig. 2 as an example, as shown in fig. 2, two different image transformation processes are respectively performed on an arbitrary target image x to obtain a first transformed image t and a second transformed image t' corresponding to the target image x. The first transformed image t is input to the online network, and the second transformed image t' is input to the target network.
In the online network, a first coding network and a first prediction network are utilized to perform image processing on the first transformed image t, so as to obtain a first output vector y corresponding to the target image x. And in the target network, performing image processing on the second converted image t 'by using a second coding network to obtain a second output vector y' corresponding to the target image x.
And determining the contrast training loss corresponding to the target image x according to the first output vector y and the second output vector y' corresponding to the target image x. For example, the contrast training loss L corresponding to the target image x may be determined based on the following formula (2):
by analogy, each target image in the local image data set can be determined, the corresponding comparison training loss in each comparison training process can be determined, and the network parameters of the first coding network and the first prediction network can be adjusted based on the corresponding comparison training loss in each comparison training process.
In a possible implementation manner, the network training method further includes: and updating the current coding network parameter corresponding to the second coding network by using the exponential moving average value of the first coding network parameter based on the preset weight.
For example, the current coding network parameter f_ξ corresponding to the second coding network may be updated based on the following formula (3):

f_ξ = m·f_ξ + (1 − m)·f_θ (3)

where f_θ represents the coding network parameters of the first coding network after each adjustment based on the training loss, and m is a preset weight.
The preset weight may be set to a relatively large value, for example 0.99, so that during training the second coding network in the target network retains more of the historical data characteristics of the client.
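Combining the loss computation with the exponential-moving-average update of formula (3), a single local training epoch might look as follows. The two augmentation callables, the optimizer, and m = 0.99 are illustrative assumptions; contrastive_loss is the sketch given above.

```python
import torch

def train_one_epoch(twin_net, loader, optimizer, augment_a, augment_b, m=0.99):
    """Local contrastive training: two views per image, online vs. target outputs,
    loss-driven update of the online network, moving-average update of the target."""
    for images in loader:                       # unlabeled target images x
        view_a = augment_a(images)              # first transformed image t
        view_b = augment_b(images)              # second transformed image t'
        y, y_target = twin_net(view_a, view_b)  # first / second output vectors
        loss = contrastive_loss(y, y_target)

        optimizer.zero_grad()
        loss.backward()                         # adjusts first coding + prediction nets
        optimizer.step()

        # Formula (3): f_xi = m * f_xi + (1 - m) * f_theta
        with torch.no_grad():
            for p_target, p_online in zip(twin_net.target_encoder.parameters(),
                                          twin_net.online_encoder.parameters()):
                p_target.mul_(m).add_((1 - m) * p_online)
```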
Since the first coding network in the online network is continuously updated based on backpropagation of the training loss, the first coding network represents more of the latest data characteristics of the client, while the second coding network retains more of the client's historical data characteristics. Therefore, in the joint training process, the communication protocol between the server and the client is set such that the client uploads to the server only the first coding network parameters corresponding to the first coding network and the first prediction network parameters corresponding to the first prediction network obtained after local training, without uploading the coding network parameters corresponding to the second coding network.
After receiving first coding network parameters and first prediction network parameters uploaded by a plurality of clients in a current training round, the server side updates the server side coding network by fusing the first coding network parameters uploaded by the plurality of clients to obtain updated server side coding network parameters of the current training round; and fusing the first prediction network parameters uploaded by the plurality of clients to update the server prediction network, so as to obtain the updated server prediction network parameters after the current training round. And the server side transmits the updated server side coding network parameters and server side prediction network parameters of the current training round to each client side again so as to execute local training of the next training round in each client side. And by analogy, the training process is executed in an iterative manner until the iterative training meets the preset training condition, and then the training is finished.
And the server side and the clients are combined to carry out iterative training so as to obtain the trained server side coding network in the server side and obtain the trained first coding network in each client. The trained server-side coding network and the first coding network can be used for image processing on the image to be processed, such as image recognition, image classification and the like. The trained server coding network obtained by the server is obtained by combining a plurality of clients for training, and has higher universality and robustness. The trained first coding network obtained by each client is obtained by training based on the local image data set, so that the method has higher individuation and higher processing precision when the client locally performs image processing.
In the case where the server-side coding network and the first coding network are feature learning networks, compared with a first coding network obtained by iterative training based only on the non-full-class data in a single client, the server-side coding network obtained after iterative training jointly with multiple clients has higher network performance and can be applied to downstream tasks to perform image processing on the image to be processed. For example, the server-side coding network may be applied to an image classification task to realize image classification of the image to be processed, or to a scene segmentation task to realize scene segmentation of the image to be processed.
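As an illustration of such downstream use, the trained coding network can be reused as a (possibly frozen) feature extractor with a small task head on top; the linear-evaluation setup below is an assumed example, not part of the disclosure.

```python
import torch.nn as nn

def build_downstream_classifier(trained_encoder, num_classes, freeze_encoder=True):
    """Reuse the trained (server-side or client-side) coding network as a feature
    extractor for a downstream task such as image classification."""
    if freeze_encoder:
        for p in trained_encoder.parameters():
            p.requires_grad = False
    feature_dim = 256  # must match the encoder's output dimension (assumed above)
    return nn.Sequential(trained_encoder, nn.Linear(feature_dim, num_classes))
```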
Fig. 3 shows a flow diagram of a network training method according to an embodiment of the present disclosure. The network training method can be executed by a server, and a server coding network and a server prediction network are deployed in the server. In some possible implementations, the network training method may be implemented by a server calling computer-readable instructions stored in a memory. As shown in fig. 3, the network training method may include:
In step S31, a first coding network parameter and a first prediction network parameter sent by at least two clients are received, where each client is respectively deployed with a first coding network, a second coding network, a first prediction network and a local image dataset, and the first coding network parameter and the first prediction network parameter are obtained by training the first coding network, the second coding network and the first prediction network based on the local image dataset.
In step S32, the server coding network is updated according to the first coding network parameter to obtain a server coding network parameter, and the server prediction network is updated according to the first prediction network parameter to obtain a server prediction network parameter.
In step S33, the server coding network parameter and the server prediction network parameter are sent to each client, where the server coding network parameter and the server prediction network parameter are used to update the first coding network parameter and the first prediction network parameter in each client.
In step S34, the above steps are iteratively performed until the iterative training satisfies a preset training condition, and the first coding network and/or the server coding network after the iterative training is used to perform image processing on the image to be processed.
Still taking fig. 2 above as an example, as shown in fig. 2, the first client generates a first coding network parameter and a first prediction network parameter after local training and sends them to the server; the second client likewise generates a first coding network parameter and a first prediction network parameter after local training and sends them to the server. The server fuses the first coding network parameters from the two clients to update the server coding network, obtaining the updated server coding network parameter fφ; and fuses the first prediction network parameters from the two clients to update the server prediction network, obtaining the updated server prediction network parameter pφ. The server then sends the server coding network parameter fφ and the server prediction network parameter pφ to each client again, so that each client performs local training of the next training round. The specific training process on the client side is similar to the related description above and is not repeated here.
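As a concrete check on this two-client fusion, the snippet below reuses the fuse_parameters helper from the earlier sketch on two toy parameter dictionaries; the tensor shapes and values are arbitrary and purely illustrative.

import torch
client1_coding = {"layer.weight": torch.ones(2, 2), "layer.bias": torch.zeros(2)}
client2_coding = {"layer.weight": 3 * torch.ones(2, 2), "layer.bias": torch.ones(2)}
server_coding_params = fuse_parameters([client1_coding, client2_coding])
print(server_coding_params["layer.weight"])  # every entry is 2.0, the mean of 1.0 and 3.0
print(server_coding_params["layer.bias"])    # every entry is 0.5, the mean of 0.0 and 1.0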
It is understood that the method embodiments mentioned above in the present disclosure can be combined with one another to form combined embodiments without departing from the underlying principles; for reasons of space, such combinations are not described in detail here. Those skilled in the art will appreciate that, in the methods of the specific embodiments above, the actual order in which the steps are executed should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a network training apparatus, an electronic device, a computer-readable storage medium and a program, each of which can be used to implement any of the network training methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding parts of the method sections, which are not repeated here.
Fig. 4 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure. The network training device is applied to a client, wherein a first coding network, a second coding network, a first prediction network and a local image data set are deployed in the client. As shown in fig. 4, the network training device 40 includes:
the receiving module 41 is configured to receive a server-side coding network parameter and a server-side prediction network parameter sent by a server, where the server is deployed with a server-side coding network and a server-side prediction network, and the server-side coding network parameter and the server-side prediction network parameter are obtained by the server training the server-side coding network and the server-side prediction network jointly with at least two clients;
the training module 42 is configured to train the first coding network, the second coding network, and the first prediction network based on the server-side coding network parameter, the server-side prediction network parameter, and the local image dataset, to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network;
a sending module 43, configured to send the first coding network parameter and the first prediction network parameter to the server, where the first coding network parameter and the first prediction network parameter are used to update the server-side coding network parameter and the server-side prediction network parameter in the server;
and the iteration module 44 is configured to iteratively perform the above steps until the iterative training meets a preset training condition, where the first coding network and/or the server coding network after the iterative training is used to perform image processing on the image to be processed.
In one possible implementation, the training module 42 includes:
the first updating submodule is used for updating the current coding network parameters corresponding to the first coding network into the server coding network parameters to obtain an updated first coding network;
the first determining submodule is used for determining the similarity between the server coding network parameter and the current coding network parameter before the first coding network is updated;
the second updating submodule is used for updating the current prediction network parameters corresponding to the first prediction network based on the similarity and the server side prediction network parameters to obtain an updated first prediction network;
and the training submodule is used for training the updated first coding network and the updated first prediction network based on the local image data set to obtain a first coding network parameter and a first prediction network parameter.
In a possible implementation manner, the first determining submodule is specifically configured to:
determining the Euclidean distance between the server side coding network parameter and the current coding network parameter before updating;
and determining the similarity according to the Euclidean distance.
In a possible implementation manner, the second update submodule is specifically configured to:
under the condition that the similarity is smaller than the similarity threshold value, updating the current prediction network parameters into server side prediction network parameters to obtain an updated first prediction network; or,
and keeping the current prediction network parameters unchanged under the condition that the similarity is greater than or equal to the similarity threshold value.
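A minimal sketch of this similarity test is given below: the Euclidean distance between the server-side coding network parameters and the pre-update local coding network parameters is turned into a similarity, which is then compared against a threshold to decide whether the first prediction network adopts the server-side prediction network parameters. The particular distance-to-similarity mapping 1 / (1 + distance), the threshold value and the helper names are assumptions made for illustration; the disclosure only specifies that the similarity is derived from the Euclidean distance and compared with a threshold.

from typing import Dict
import torch

def parameter_similarity(server_params: Dict[str, torch.Tensor],
                         old_local_params: Dict[str, torch.Tensor]) -> float:
    # Euclidean distance between the flattened parameter sets, mapped to a similarity.
    server_vec = torch.cat([p.flatten().float() for p in server_params.values()])
    local_vec = torch.cat([old_local_params[name].flatten().float() for name in server_params])
    distance = torch.norm(server_vec - local_vec, p=2).item()
    return 1.0 / (1.0 + distance)  # assumed mapping: a larger distance gives a smaller similarity

def update_prediction_params(current_prediction_params: Dict[str, torch.Tensor],
                             server_prediction_params: Dict[str, torch.Tensor],
                             similarity: float, threshold: float = 0.5) -> Dict[str, torch.Tensor]:
    if similarity < threshold:
        # The local coding network has drifted far from the server coding network,
        # so adopt the server-side prediction network parameters.
        return server_prediction_params
    # Otherwise keep the current (locally trained) prediction network parameters unchanged.
    return current_prediction_params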
In one possible implementation, the local image dataset includes a plurality of target images;
a training submodule, specifically configured to:
respectively carrying out two different image transformation processes on each target image to obtain a first transformation image and a second transformation image corresponding to each target image;
performing image processing on the first transformation image corresponding to each target image by using the updated first coding network and the updated first prediction network to obtain a first output vector corresponding to each target image;
performing image processing on the second transformation image corresponding to each target image by using the second coding network to obtain a second output vector corresponding to each target image;
determining a contrast training loss corresponding to each target image according to the first output vector and the second output vector corresponding to each target image;
and adjusting the network parameters of the updated first coding network and the updated first prediction network according to the contrast training loss corresponding to each target image to obtain the first coding network parameters and the first prediction network parameters.
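The local training described by these submodules resembles the two-branch scheme of the cited Grill et al. (BYOL) reference: the updated first coding network plus first prediction network process one transformed view of a target image, the second coding network processes the other view, and the contrast training loss compares the two output vectors. The sketch below shows a single training step on one toy batch; the network architectures, the stand-in transformation and the negative-cosine-similarity form of the loss are assumptions made only for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

first_coding_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))   # updated first coding network
first_prediction_net = nn.Linear(128, 128)                                    # updated first prediction network
second_coding_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # second coding network

optimizer = torch.optim.SGD(
    list(first_coding_net.parameters()) + list(first_prediction_net.parameters()), lr=0.05)

def transform(images: torch.Tensor) -> torch.Tensor:
    # Stand-in for an image transformation process; a real pipeline would crop, flip, jitter, etc.
    return images + 0.1 * torch.randn_like(images)

target_images = torch.rand(8, 3, 32, 32)               # a toy batch from the local image dataset
first_view, second_view = transform(target_images), transform(target_images)  # two different transformations

first_output = first_prediction_net(first_coding_net(first_view))   # first output vector
with torch.no_grad():
    second_output = second_coding_net(second_view)                  # second output vector, no gradients

# Contrast training loss: here a negative cosine similarity between the two output vectors.
loss = -F.cosine_similarity(first_output, second_output, dim=-1).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()   # adjusts the parameters of the first coding network and the first prediction network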
In a possible implementation manner, the network training apparatus 40 further includes:
and the updating module is used for updating the current coding network parameter corresponding to the second coding network by using the exponential moving average value of the first coding network parameter based on the preset weight.
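A minimal sketch of this exponential-moving-average update of the second coding network is given below; the preset weight value of 0.99, the direction of the weighting (old second coding network parameters weighted by the preset weight) and the dictionary representation of the parameters are assumptions made for illustration.

from typing import Dict
import torch

def ema_update(second_coding_params: Dict[str, torch.Tensor],
               first_coding_params: Dict[str, torch.Tensor],
               weight: float = 0.99) -> Dict[str, torch.Tensor]:
    # new second coding parameter = weight * old second coding parameter
    #                               + (1 - weight) * first coding parameter
    return {name: weight * second_coding_params[name] + (1.0 - weight) * first_coding_params[name]
            for name in second_coding_params}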
Fig. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure. The network training device is applied to a server side, and a server side coding network and a server side prediction network are deployed in the server side. As shown in fig. 5, the network training apparatus 50 includes:
a receiving module 51, configured to receive a first coding network parameter and a first prediction network parameter sent by at least two clients, where each client is respectively deployed with a first coding network, a second coding network, a first prediction network, and a local image dataset, and the first coding network parameter and the first prediction network parameter are obtained by training the first coding network, the second coding network, and the first prediction network based on the local image dataset;
an updating module 52, configured to update the server-side coding network according to the first coding network parameter to obtain a server-side coding network parameter, and update the server-side prediction network according to the first prediction network parameter to obtain a server-side prediction network parameter;
a sending module 53, configured to send the server-side coding network parameter and the server-side prediction network parameter to each client, where the server-side coding network parameter and the server-side prediction network parameter are used to update the first coding network parameter and the first prediction network parameter in each client;
and the iteration module 54 is configured to iteratively perform the above steps until the iterative training meets a preset training condition, and the server-side coding network and/or the first coding network after the iterative training is used to perform image processing on the image to be processed.
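Putting the server-side modules together, the sketch below iterates the receive-fuse-send cycle until a preset training condition is met. The condition is modelled here as a fixed number of rounds and each client as an in-memory callable that performs its local training; these choices, as well as the averaging fusion rule, are assumptions made purely for illustration.

from typing import Callable, Dict, List, Tuple
import torch

ParamDict = Dict[str, torch.Tensor]
# A client is modelled as a callable that takes the server coding / prediction parameters,
# performs local training, and returns its first coding / first prediction parameters.
Client = Callable[[ParamDict, ParamDict], Tuple[ParamDict, ParamDict]]

def average(param_dicts: List[ParamDict]) -> ParamDict:
    return {name: torch.stack([d[name].float() for d in param_dicts]).mean(dim=0)
            for name in param_dicts[0]}

def train_server(clients: List[Client], coding_params: ParamDict,
                 prediction_params: ParamDict, num_rounds: int = 10):
    for _ in range(num_rounds):   # preset training condition, assumed here to be a round budget
        # Receiving module: collect the first coding / prediction parameters from every client.
        uploads = [client(coding_params, prediction_params) for client in clients]
        # Updating module: fuse them into new server-side coding / prediction parameters.
        coding_params = average([coding for coding, _ in uploads])
        prediction_params = average([prediction for _, prediction in uploads])
        # Sending module / iteration module: the new parameters are handed back to the clients
        # at the start of the next loop iteration.
    return coding_params, prediction_params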
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the description of those method embodiments, which is not repeated here for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product comprising computer-readable code, or a non-transitory computer-readable storage medium carrying computer-readable code; when the computer-readable code runs on a processor of an electronic device, the processor of the electronic device executes the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 6, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure. As shown in fig. 7, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the Apple graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing state information of the computer-readable program instructions, and this electronic circuitry can execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (11)
1. A network training method is applied to a client, wherein a first coding network, a second coding network, a first prediction network and a local image data set are deployed in the client, and the method comprises the following steps:
receiving a server coding network parameter and a server prediction network parameter sent by a server, wherein a server coding network and a server prediction network are deployed in the server, and the server coding network parameter and the server prediction network parameter are obtained by the server training the server coding network and the server prediction network jointly with at least two clients;
training the first coding network, the second coding network and the first prediction network based on the server-side coding network parameters, the server-side prediction network parameters and the local image data set to obtain first coding network parameters corresponding to the first coding network and first prediction network parameters corresponding to the first prediction network;
sending the first coding network parameter and the first prediction network parameter to the server, wherein the first coding network parameter and the first prediction network parameter are used for updating the server coding network parameter and the server prediction network parameter in the server;
and iteratively executing the steps until iterative training meets a preset training condition, wherein the first coding network and/or the server coding network after iterative training is used for carrying out image processing on the image to be processed.
2. The method of claim 1, wherein the training the first coding network, the second coding network, and the first prediction network based on the server-side coding network parameter, the server-side prediction network parameter, and the local image dataset to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network comprises:
updating the current coding network parameter corresponding to the first coding network into the server coding network parameter to obtain the updated first coding network;
determining the similarity between the server-side coding network parameter and the current coding network parameter before the first coding network is updated;
updating the current prediction network parameter corresponding to the first prediction network based on the similarity and the server-side prediction network parameter to obtain the updated first prediction network;
training the updated first coding network and the updated first prediction network based on the local image dataset to obtain the first coding network parameter and the first prediction network parameter.
3. The method of claim 2, wherein the determining the similarity between the server-side encoded network parameter and the current encoded network parameter before the first encoded network update comprises:
determining the Euclidean distance between the server coding network parameter and the current coding network parameter before the first coding network is updated;
and determining the similarity according to the Euclidean distance.
4. The method according to claim 3, wherein the updating the current prediction network parameter corresponding to the first prediction network based on the similarity and the server-side prediction network parameter to obtain the updated first prediction network comprises:
under the condition that the similarity is smaller than a similarity threshold value, updating the current prediction network parameter to the server-side prediction network parameter to obtain the updated first prediction network; or,
keeping the current prediction network parameter unchanged under the condition that the similarity is greater than or equal to the similarity threshold value.
5. The method of any of claims 2 to 4, wherein the local image dataset comprises a plurality of target images;
the training the updated first coding network and the updated first prediction network based on the local image dataset to obtain the first coding network parameter and the first prediction network parameter includes:
respectively carrying out two different image transformation processes on each target image to obtain a first transformation image and a second transformation image corresponding to each target image;
performing image processing on the first transformed image corresponding to each target image by using the updated first coding network and the updated first prediction network to obtain a first output vector corresponding to each target image;
performing image processing on the second transformed image corresponding to each target image by using the second coding network to obtain a second output vector corresponding to each target image;
determining a contrast training loss corresponding to each target image according to the first output vector and the second output vector corresponding to each target image;
and adjusting the network parameters of the updated first coding network and the updated first prediction network according to the contrast training loss corresponding to each target image to obtain the first coding network parameter and the first prediction network parameter.
6. The method of claim 5, further comprising:
and updating the current coding network parameter corresponding to the second coding network by using the first coding network parameter based on a preset weight.
7. A network training method is applied to a server, wherein a server coding network and a server prediction network are deployed in the server, and the method comprises the following steps:
receiving a first coding network parameter and a first prediction network parameter sent by at least two clients, wherein each client is respectively deployed with a first coding network, a second coding network, a first prediction network and a local image data set, and the first coding network parameter and the first prediction network parameter are obtained by training the first coding network, the second coding network and the first prediction network based on the local image data set;
updating the server side coding network according to the first coding network parameter to obtain a server side coding network parameter, and updating the server side prediction network according to the first prediction network parameter to obtain a server side prediction network parameter;
sending the server-side coding network parameter and the server-side prediction network parameter to each of the clients, wherein the server-side coding network parameter and the server-side prediction network parameter are used for updating the first coding network parameter and the first prediction network parameter in each of the clients;
and iteratively executing the steps until iterative training meets a preset training condition, wherein the server coding network and/or the first coding network after iterative training is used for carrying out image processing on the image to be processed.
8. A network training apparatus applied to a client, in which a first coding network, a second coding network, a first prediction network and a local image dataset are deployed, the apparatus comprising:
a receiving module, configured to receive a server-side coding network parameter and a server-side prediction network parameter sent by a server, wherein a server-side coding network and a server-side prediction network are deployed in the server, and the server-side coding network parameter and the server-side prediction network parameter are obtained by the server training the server-side coding network and the server-side prediction network jointly with at least two clients;
a training module, configured to train the first coding network, the second coding network, and the first prediction network based on the server-side coding network parameter, the server-side prediction network parameter, and the local image dataset, to obtain a first coding network parameter corresponding to the first coding network and a first prediction network parameter corresponding to the first prediction network;
a sending module, configured to send the first coding network parameter and the first prediction network parameter to the server, where the first coding network parameter and the first prediction network parameter are used to update the server-side coding network parameter and the server-side prediction network parameter in the server;
and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the first coding network and/or the server coding network after the iterative training is used for carrying out image processing on the image to be processed.
9. A network training device is applied to a server, wherein a server coding network and a server prediction network are deployed in the server, and the device comprises:
the system comprises a receiving module, a prediction module and a prediction module, wherein the receiving module is used for receiving a first coding network parameter and a first prediction network parameter sent by at least two clients, a first coding network, a second coding network, a first prediction network and a local image data set are respectively deployed in each client, and the first coding network parameter and the first prediction network parameter are obtained by training the first coding network, the second coding network and the first prediction network based on the local image data set;
the updating module is used for updating the server side coding network according to the first coding network parameter to obtain a server side coding network parameter, and updating the server side prediction network according to the first prediction network parameter to obtain a server side prediction network parameter;
a sending module, configured to send the server-side coding network parameter and the server-side prediction network parameter to each client, where the server-side coding network parameter and the server-side prediction network parameter are used to update the first coding network parameter and the first prediction network parameter in each client;
and the iteration module is used for iteratively executing the steps until the iterative training meets a preset training condition, and the server-side coding network and/or the first coding network after the iterative training is used for carrying out image processing on the image to be processed.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any of claims 1 to 7.
11. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110737722.5A CN113344131A (en) | 2021-06-30 | 2021-06-30 | Network training method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113344131A true CN113344131A (en) | 2021-09-03 |
Family
ID=77481907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110737722.5A Pending CN113344131A (en) | 2021-06-30 | 2021-06-30 | Network training method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344131A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103502899A (en) * | 2011-01-26 | 2014-01-08 | 谷歌公司 | Dynamic predictive modeling platform |
US20200295936A1 (en) * | 2017-11-02 | 2020-09-17 | nChain Holdings Limited | Computer-implemented systems and methods for linking a blockchain to a digital twin |
WO2020229684A1 (en) * | 2019-05-16 | 2020-11-19 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Concepts for federated learning, client classification and training data similarity measurement |
CN110263921A (en) * | 2019-06-28 | 2019-09-20 | 深圳前海微众银行股份有限公司 | A kind of training method and device of federation's learning model |
CN111291897A (en) * | 2020-02-10 | 2020-06-16 | 深圳前海微众银行股份有限公司 | Semi-supervision-based horizontal federal learning optimization method, equipment and storage medium |
CN112101404A (en) * | 2020-07-24 | 2020-12-18 | 西安电子科技大学 | Image classification method and system based on generation countermeasure network and electronic equipment |
CN112101403A (en) * | 2020-07-24 | 2020-12-18 | 西安电子科技大学 | Method and system for classification based on federate sample network model and electronic equipment |
CN112598643A (en) * | 2020-12-22 | 2021-04-02 | 百度在线网络技术(北京)有限公司 | Depth counterfeit image detection and model training method, device, equipment and medium |
Non-Patent Citations (1)
Title |
---|
Grill, Jean-Bastien, et al.: "Bootstrap your own latent - a new approach to self-supervised learning", Advances in Neural Information Processing Systems 33 (2020), pages 1-14 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113988225A (en) * | 2021-12-24 | 2022-01-28 | 支付宝(杭州)信息技术有限公司 | Method and device for establishing representation extraction model, representation extraction and type identification |
CN113988225B (en) * | 2021-12-24 | 2022-05-06 | 支付宝(杭州)信息技术有限公司 | Method and device for establishing representation extraction model, representation extraction and type identification |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800737B (en) | Face recognition method and device, electronic equipment and storage medium | |
CN112001321B (en) | Network training method, pedestrian re-identification method, device, electronic equipment and storage medium | |
CN111462268B (en) | Image reconstruction method and device, electronic equipment and storage medium | |
CN110287874B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN109658352B (en) | Image information optimization method and device, electronic equipment and storage medium | |
CN109766954B (en) | Target object processing method and device, electronic equipment and storage medium | |
CN107944409B (en) | Video analysis method and device capable of distinguishing key actions | |
CN108063773B (en) | Application service access method and device based on mobile edge computing | |
CN109711546B (en) | Neural network training method and device, electronic equipment and storage medium | |
CN109165738B (en) | Neural network model optimization method and device, electronic device and storage medium | |
CN111340731B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110458218B (en) | Image classification method and device and classification network training method and device | |
CN110909861B (en) | Neural network optimization method and device, electronic equipment and storage medium | |
CN111242303A (en) | Network training method and device, and image processing method and device | |
CN111310664B (en) | Image processing method and device, electronic equipment and storage medium | |
CN113259583B (en) | Image processing method, device, terminal and storage medium | |
CN111311588B (en) | Repositioning method and device, electronic equipment and storage medium | |
CN110135349A (en) | Recognition methods, device, equipment and storage medium | |
CN110750226A (en) | Central control equipment management method and device, computer equipment and storage medium | |
CN110415258B (en) | Image processing method and device, electronic equipment and storage medium | |
CN111325786B (en) | Image processing method and device, electronic equipment and storage medium | |
CN109447258B (en) | Neural network model optimization method and device, electronic device and storage medium | |
CN110750961A (en) | File format conversion method and device, computer equipment and storage medium | |
CN111988622B (en) | Video prediction method and device, electronic equipment and storage medium | |
CN113344131A (en) | Network training method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210903 |