CN116543210A - Medical image classification method based on federal learning and attention mechanism - Google Patents

Medical image classification method based on federal learning and attention mechanism

Info

Publication number
CN116543210A
Authority
CN
China
Prior art keywords
model
medical image
image classification
attention
model parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310503180.4A
Other languages
Chinese (zh)
Inventor
郭艳卿
刘冠初
付海燕
何浩
王湾湾
刘航
李祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Dongjian Intelligent Technology Co ltd
Dalian University of Technology
Original Assignee
Shenzhen Dongjian Intelligent Technology Co ltd
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Dongjian Intelligent Technology Co ltd, Dalian University of Technology filed Critical Shenzhen Dongjian Intelligent Technology Co ltd
Priority to CN202310503180.4A priority Critical patent/CN116543210A/en
Publication of CN116543210A publication Critical patent/CN116543210A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention provides a medical image classification method based on federal learning and attention mechanism, which comprises the following steps: S1, acquiring medical image data to be processed; S2, inputting the medical image data to be processed into a trained medical image classification model, and acquiring the classification result output by the medical image classification model as the medical image classification result, wherein the medical image classification model is a ResNet-50 neural network model incorporating a channel attention mechanism and is obtained based on federal learning training. The invention builds on the ResNet-50 neural network model and introduces a channel attention mechanism, fully exploiting the potential connection between convolutional neural networks and attention mechanisms, and uses a federal learning framework to jointly train the deep learning model on multi-party data while ensuring that users' private data is not leaked.

Description

Medical image classification method based on federal learning and attention mechanism
Technical Field
The invention relates to the technical field of computer vision, in particular to a medical image classification method based on federal learning and an attention mechanism.
Background
Computer vision has improved greatly since the advent of deep learning, which requires large amounts of input data to train network models. Medical images are private, few in number and difficult to acquire, which makes model training on medical images very difficult. Considering the privacy of medical images, the security of the medical image data source must be ensured when medical images are used for learning tasks; considering the scarcity of medical images and the large data volume required by deep learning models, joint model training by multiple parties has become a necessity. On the premise of protecting personal privacy, several hospitals or medical institutions can jointly train a disease prediction model to improve disease diagnosis accuracy.
At present, medical image classification mostly adopts either a pure convolution model or a pure self-attention model, ignoring the potential connection between convolutional neural networks and attention mechanisms, so the classification effect is poor.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a medical image classification method based on federal learning and attention mechanisms, which fully exploits the potential connection between convolutional neural networks and attention mechanisms, and trains the classification model based on federal learning, obtaining a classification model with better classification effect on the premise of protecting user privacy.
The invention adopts the following technical means:
a medical image classification method based on federal learning and attention mechanisms, comprising the steps of:
s1, acquiring medical image data to be processed;
s2, inputting the medical image data to be processed into a trained medical image classification model, and acquiring a classification result output by the medical image classification model as a medical image classification result, wherein the medical image classification model is a ResNet-50 neural network model for introducing a channel attention mechanism and is obtained based on federal learning training.
Further, the training steps of the medical image classification model are as follows:
s100, issuing initialization model parameters by a central server, wherein the model is used for classifying medical images;
s200, each participant receives the initialized model parameters and takes the initialized model parameters as local model parameters;
s300, each participant trains a local model based on local data and generates new local model parameters;
s400, uploading each new local model parameter to a central server by each participant, and carrying out aggregation treatment on each model parameter by the central server to generate an aggregation model parameter;
s500, the central server transmits the aggregation model parameters to all the participants, and all the participants update the local model parameters by utilizing the aggregation model parameters;
and S600, each participant judges, based on the current local model parameters, whether the local model has reached the preset number of iteration rounds or has converged; if so, the participant saves the current local model parameters; otherwise, return to S300.
Further, the ResNet-50 neural network model incorporating the channel attention mechanism comprises STAGE0, STAGE1, STAGE2, STAGE3 and STAGE4 connected in sequence, wherein STAGE1, STAGE2, STAGE3 and STAGE4 are residual networks;
wherein STAGE1 comprises one BK1 and two BK2, where BK1 denotes a BottleNeck whose numbers of input and output channels differ, and BK2 denotes a BottleNeck whose numbers of input and output channels are the same;
STAGE2 comprises one BK1 and three BK2;
STAGE3 comprises one BK1 and five BK2;
STAGE4 comprises one BK1 and two BK2.
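The stage layout above can be written down as a small configuration table. This is only a sketch using the patent's BK1/BK2 naming; the counts correspond to the standard ResNet-50 bottleneck layout (3, 4, 6 and 3 bottlenecks per stage).

```python
# Stage layout of the attention-augmented ResNet-50 described above.
# BK1: bottleneck with differing input/output channel counts (downsampling);
# BK2: bottleneck with equal input/output channel counts.
RESNET50_STAGES = {
    "STAGE1": {"BK1": 1, "BK2": 2},   # 3 bottlenecks
    "STAGE2": {"BK1": 1, "BK2": 3},   # 4 bottlenecks
    "STAGE3": {"BK1": 1, "BK2": 5},   # 6 bottlenecks
    "STAGE4": {"BK1": 1, "BK2": 2},   # 3 bottlenecks
}

total_bottlenecks = sum(s["BK1"] + s["BK2"] for s in RESNET50_STAGES.values())
# 3 + 4 + 6 + 3 = 16 bottlenecks, the usual ResNet-50 depth
```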
Compared with the prior art, the invention has the following advantages:
(1) The invention provides a solution to the data-island problem: for holders of sensitive data, it is difficult to improve the performance of a local model using other institutions' data; based on the invention, an institution can improve the performance of its local model without any data transmission;
(2) The risk of privacy disclosure is reduced: transferring the model is much safer than transferring data, and better avoids privacy leakage during data transmission;
(3) Communication resources are saved: compared with a large amount of data, the model occupies far fewer storage resources, reducing communication overhead.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 shows the framework of a horizontal federal learning system.
Fig. 2 shows the ResNet network structure used in the present invention.
Fig. 3 is a schematic diagram of the channel attention mechanism.
Fig. 4 is a schematic diagram of the channel attention mechanism of the present invention.
FIG. 5 is a flow chart of the training of the medical image classification model according to the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
The invention provides a medical image classification method based on federal learning and attention mechanism, which comprises the following steps:
s1, acquiring medical image data to be processed;
s2, inputting the medical image data to be processed into a trained medical image classification model, and acquiring a classification result output by the medical image classification model as a medical image classification result, wherein the medical image classification model is a ResNet-50 neural network model for introducing a channel attention mechanism and is obtained based on federal learning training. Further, the training steps of the medical image classification model are as follows:
s100, issuing initialization model parameters by a central server, wherein the model is used for classifying medical images;
s200, each participant receives the initialized model parameters and takes the initialized model parameters as local model parameters;
s300, each participant trains a local model based on local data and generates new local model parameters;
s400, uploading each new local model parameter to a central server by each participant, and carrying out aggregation treatment on each model parameter by the central server to generate an aggregation model parameter;
s500, the central server transmits the aggregation model parameters to all the participants, and all the participants update the local model parameters by utilizing the aggregation model parameters;
and S600, each participant judges, based on the current local model parameters, whether the local model has reached the preset number of iteration rounds or has converged; if so, the participant saves the current local model parameters; otherwise, return to S300.
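The S100-S600 loop is a FedAvg-style procedure. The toy sketch below uses a linear model and full-batch gradient descent in place of local deep-network training; all function names, hyperparameters and the synthetic data are illustrative assumptions.

```python
import numpy as np

def local_train(theta, X, y, lr=0.1, epochs=5):
    # S300: a participant refines the received parameters on its local data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ theta - y) / len(y)   # least-squares gradient
        theta = theta - lr * grad
    return theta

def fedavg_round(theta_global, datasets):
    # S400-S500: server averages local parameters, weighted by data size
    sizes = np.array([len(y) for _, y in datasets], dtype=float)
    trained = [local_train(theta_global.copy(), X, y) for X, y in datasets]
    return sum(w * t for w, t in zip(sizes / sizes.sum(), trained))

rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0])
datasets = []
for _ in range(3):                          # three participants, private data
    X = rng.normal(size=(40, 2))
    datasets.append((X, X @ true_theta + 0.01 * rng.normal(size=40)))

theta = np.zeros(2)                         # S100: server initialization
for _ in range(30):                         # S600: fixed round budget
    theta = fedavg_round(theta, datasets)
```

Only model parameters cross the participant/server boundary in this loop; the `datasets` never leave their owners, which is the point of steps S300-S500.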
As shown in fig. 1, one iteration round of horizontal federal learning consists of the following steps:
1. Each participant trains locally, computes local model parameters (e.g., gradients or neural network weights), encrypts them, and sends them to the central server.
2. The server aggregates the parameters of all participants to obtain the weighted model parameters of the joint training.
3. The central server sends the aggregated parameter results to each participant.
4. Each participant receives the aggregated model parameters and updates its local model parameters.
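The server-side aggregation in step 2 can be sketched as a size-weighted average over each participant's parameter tensors. The encryption mentioned in step 1 is omitted here for brevity, and the parameter names and sample counts are illustrative.

```python
import numpy as np

def aggregate(params_list, n_samples):
    # weighted average of parameter dicts, weights proportional to data size
    weights = np.asarray(n_samples, dtype=float)
    weights = weights / weights.sum()
    return {name: sum(w * p[name] for w, p in zip(weights, params_list))
            for name in params_list[0]}

client_params = [{"w": np.array([1.0, 2.0])},   # participant 1 (10 samples)
                 {"w": np.array([3.0, 4.0])}]   # participant 2 (30 samples)
global_params = aggregate(client_params, n_samples=[10, 30])
# weights are 0.25 and 0.75, so "w" becomes [2.5, 3.5]
```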
The invention not only applies the horizontal federal learning framework, but also introduces a channel attention mechanism module into the local model during training, fully combining the advantages of convolutional neural networks and attention mechanisms and improving model accuracy while adding almost no computational complexity.
As shown in FIG. 2, the invention performs medical image classification based on the deep learning network ResNet-50, which has a simple network structure, a deep network model and a relatively fast training speed. The invention takes horizontal federal learning as the basic framework and introduces a channel attention mechanism module on top of the ResNet-50 network to strengthen its receptive field, effectively improving the image classification performance of the federal ResNet model. The backbone of ResNet-50 is shown in FIG. 2, where CONV is an abbreviation for Convolution, 7×7 refers to the size of the convolution kernel, and 64 refers to the number of convolution kernels, i.e., the number of channels or feature maps output by the convolution layer; S refers to the stride of the convolution kernel; BN is an abbreviation for the Batch Normalization layer; RELU refers to the ReLU activation function; BK is an abbreviation for BottleNeck, and the two BottleNecks on the right side of FIG. 2 correspond to two cases: the numbers of input and output channels are the same (BK2) or different (BK1). After data preprocessing, the input image has a pixel size of 224×224×3; it enters the first convolution layer conv1, where the 7×7 convolution kernels produce a 112×112×64 output, and then passes through the STAGE1-STAGE4 residual networks, whose output is 7×7×2048. After all convolution operations, an average pooling operation is applied, followed by a fully connected layer of 1000 neurons, and finally a Softmax function yields the probability values of the different image target classes.
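The quoted feature-map sizes (224×224×3 in, 112×112×64 after conv1, 7×7×2048 after STAGE4) follow from the usual convolution output-size formula. The sketch below traces them; the per-stage strides and paddings are the standard ResNet-50 settings, assumed here rather than taken verbatim from the patent.

```python
def conv_out(size, kernel, stride, pad):
    # standard convolution output size: floor((n + 2p - k) / s) + 1
    return (size + 2 * pad - kernel) // stride + 1

h = conv_out(224, kernel=7, stride=2, pad=3)   # conv1: 224 -> 112
h = conv_out(h, kernel=3, stride=2, pad=1)     # 3x3 max-pool: 112 -> 56
for stride in (1, 2, 2, 2):                    # STAGE1..STAGE4 downsampling
    h = conv_out(h, kernel=1, stride=stride, pad=0)   # 56 -> 56 -> 28 -> 14 -> 7
final_channels = 2048                          # channel count after STAGE4
```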
In the present invention, STAGE1-STAGE4 in the ResNet-50 backbone perform the residual operations. BN is an abbreviation for the Batch Normalization layer and BK for BottleNeck; in the BK1 structure the numbers of input and output channels differ, while in BK2 they are the same. C is the number of input channels, C1 is the number of channels of the left convolution layer in the BK structure, H and W are the height and width of the input feature, and S is the stride of the convolution kernel. To strengthen the network's attention to important features, a channel attention mechanism is introduced into the BK structure; RA denotes the attention part combined with the convolutional neural network.
Furthermore, the present invention introduces an attention mechanism into the model. A standard channel self-attention module with N heads is shown in FIG. 3. The self-attention path divides the intermediate features into N groups, each group containing three feature maps, each produced by a 1×1 convolution. The corresponding three feature maps serve as query, key and value, following the conventional multi-head self-attention module.
As shown in fig. 4, three 1×1 convolution kernels first perform convolution operations on the feature map, and the channels of the feature map are divided into groups of C/N to emulate the N heads of the attention mechanism, yielding the query, key and value of the attention. For each head the attention weights are computed, the N head outputs are concatenated, and the result of size H×W×C is denoted W_1. For the convolution part, a 1×1 convolution kernel is likewise applied to the feature map, and its output is denoted W_2. The output of the module is αW_1 + βW_2.
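The fusion above can be made concrete with a small numeric toy: W_1 from a multi-head attention branch over C/N channel groups, W_2 from a 1×1-convolution branch, combined as α·W_1 + β·W_2. The shapes, random weights and the scalars α, β are illustrative assumptions (in the model they would be learned).

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, C, N = 4, 4, 8, 2                 # spatial size, channels, number of heads
feat = rng.normal(size=(H * W, C))      # feature map flattened to (H*W, C)

def conv1x1(x, weight):                 # a 1x1 convolution is a per-pixel matmul
    return x @ weight

def split_heads(x):                     # split C channels into N groups of C/N
    return x.reshape(H * W, N, C // N).transpose(1, 0, 2)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

q = split_heads(conv1x1(feat, rng.normal(size=(C, C))))   # queries
k = split_heads(conv1x1(feat, rng.normal(size=(C, C))))   # keys
v = split_heads(conv1x1(feat, rng.normal(size=(C, C))))   # values
attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(C // N))
W1 = (attn @ v).transpose(1, 0, 2).reshape(H * W, C)      # attention branch
W2 = conv1x1(feat, rng.normal(size=(C, C)))               # convolution branch
alpha, beta = 1.0, 1.0                                    # learnable in practice
out = alpha * W1 + beta * W2
```

Note that both branches share the same 1×1-convolution primitive, which is the potential connection between convolution and attention that the patent exploits.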
The Attention mechanism is a resource-allocation mechanism: instead of distributing resources evenly, it allocates them according to the importance of the attended objects, giving more to important units and less to unimportant or poor ones. In the structural design of deep neural networks, the resources allocated by attention are usually weights.
The calculation process of the Attention in the neural network is as follows:
set input characteristicsOutput characteristics->C in And C out Is the input and output channel size. />g ij ∈R Cout Representing the tensor of F corresponding to pixel (i, j) of G, this attention module outputs as:
where is the standard self-attention module for N heads,is the local area of the pixel shown in queries, keys, values, whose spatial extent k is centered on (i, j), and +.>Is about N k The corresponding attention weights of the features within (i, j).
The widely used self-attention module, attention weights are calculated as follows:
wherein d isIs a feature of the (c) wafer. In summary, the multi-headed channel self-attention mechanism can be expressed as two phases:
Ⅰ:
II:
the following describes the solution and effects of the present invention by a specific application example.
Federal medical treatment stores medical data in a distributed manner across different organizations or institutions, thereby protecting medical privacy while enabling data sharing. The following is a federal medical application example:
Assume that several medical institutions hold medical image data of some patients, but each holding has certain characteristics. For example, medical institution A has data on a specific skin disease mainly for patients aged 10-30; for the same disease, the patients of medical institution B are mainly aged 25-40, and those of medical institution C are mainly aged 30-50. We wish to study this skin disease and explore the relationship between morbidity and age, but the medical data carries the attribute of privacy protection, so the three institutions cannot exchange data. A model trained by only one institution performs poorly, does not reflect the general pattern, and makes it difficult to obtain the relationship between morbidity and age. At present, countries have successively enacted privacy-protection legislation, so the traditional solution of pooling data for joint modeling is not feasible. The invention builds a more efficient learning model by combining multiple medical institutions on the premise that users' local data are not leaked; using the data set samples of multiple users, the proposed algorithm can train a medical image classification model whose performance is superior to a model trained locally by a single participant.
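The heterogeneity described above can be illustrated with synthetic ages: each institution observes only a narrow age band, so no single local data set covers the full 10-50 range. The institution names, sample counts and uniform distributions are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
local_ages = {
    "institution_A": rng.uniform(10, 30, size=200),  # skin-disease patients, 10-30
    "institution_B": rng.uniform(25, 40, size=200),  # same disease, 25-40
    "institution_C": rng.uniform(30, 50, size=200),  # same disease, 30-50
}

pooled = np.concatenate(list(local_ages.values()))   # never materialized in FL
pooled_span = pooled.max() - pooled.min()            # close to 40 years
local_spans = {k: v.max() - v.min() for k, v in local_ages.items()}
```

Federated training approximates learning on the pooled distribution without ever building the `pooled` array on any one site.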
The implementation process of the algorithm of the invention is as follows. First, medical institutions A, B and C (or more participants; three parties are assumed here for convenience of explanation) sort out the medical data available for joint modeling. After the trusted third-party central server confirms the model to be trained, it broadcasts the initialization model parameters to each participant. Upon receiving the initial model, institutions A, B and C each train a local medical image classification model using their own local medical data sets. After the prescribed number of local model iterations is completed, the locally trained model parameters are uploaded to the central server for the global model aggregation operation; that is, medical institutions A, B and C simultaneously upload their trained local models to the trusted third-party server. The communication between the participating medical institutions and the central server transmits only model parameters, so there is no risk of user data leakage in this process. After the central server receives the local models transmitted by all users, it performs the model aggregation and model distribution work, ending one global iteration. The above steps are repeated until the model converges or reaches the specified number of iterations.
The specific model training process is shown in fig. 5, and includes:
(1) Model broadcast: first the central server will send the initialized or up-to-date global model parameters to all participants. If the first round of training starts, the initialized model parameters are sent, otherwise, the latest global model parameters are sent.
(2) Selection of participants: the central server randomly selects, from all participants, those that meet the training requirements. In the first global training round, participants are selected at random; in subsequent rounds, a participant may be selected for the next global update round only if it has completed all local training rounds of the previous global update round and is currently idle.
(3) Participant local update: after receiving the global model parameters issued by the central server, each participant performs the local update step on its private local data set; specifically, the received parameters are optimized by stochastic gradient descent.
(4) Participant uploads the model: after completing the local update step, the participant uploads the locally updated model parameters to the central server. It should be emphasized that in this process the uploaded information consists only of model parameters and never includes any of the participant's original data.
(5) Central server aggregates the model: after the central server receives the model parameters uploaded by all participants in the current global round, it aggregates the model parameters of all participants using a weighted-average policy, finally obtaining the new round of global model parameters. The above constitutes one global update round; these steps are cycled until the upper limit of allowed global update rounds is reached, and finally all participants share the final model parameters.
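The local update of step (3) can be sketched as a single stochastic-gradient-descent step on a private mini-batch. A toy squared loss stands in for the classification loss; the learning rate, batch and model are illustrative assumptions.

```python
import numpy as np

def sgd_step(theta, X_batch, y_batch, lr=0.05):
    # one SGD update of the received global parameters on one mini-batch
    residual = X_batch @ theta - y_batch
    grad = 2 * X_batch.T @ residual / len(y_batch)   # gradient of mean squared error
    return theta - lr * grad

rng = np.random.default_rng(4)
X_batch = rng.normal(size=(16, 3))       # one private mini-batch (never uploaded)
y_batch = rng.normal(size=16)
theta_received = np.zeros(3)             # global parameters from the server
theta_updated = sgd_step(theta_received, X_batch, y_batch)

def mse(theta):
    return float(np.mean((X_batch @ theta - y_batch) ** 2))
```

Only `theta_updated` would be sent back in step (4); `X_batch` and `y_batch` stay on the participant's machine.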
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (5)

1. A medical image classification method based on federal learning and attention mechanisms, comprising the steps of:
s1, acquiring medical image data to be processed;
s2, inputting the medical image data to be processed into a trained medical image classification model, and acquiring a classification result output by the medical image classification model as a medical image classification result, wherein the medical image classification model is a ResNet-50 neural network model for introducing a channel attention mechanism and is obtained based on federal learning training.
2. A medical image classification method based on federal learning and attention mechanisms according to claim 1, wherein the training step of the medical image classification model is as follows:
s100, issuing initialization model parameters by a central server, wherein the model is used for classifying medical images;
s200, each participant receives the initialized model parameters and takes the initialized model parameters as local model parameters;
s300, each participant trains a local model based on local data and generates new local model parameters;
s400, uploading each new local model parameter to a central server by each participant, and carrying out aggregation treatment on each model parameter by the central server to generate an aggregation model parameter;
s500, the central server transmits the aggregation model parameters to all the participants, and all the participants update the local model parameters by utilizing the aggregation model parameters;
and S600, each participant judges, based on the current local model parameters, whether the local model has reached the preset number of iteration rounds or has converged; if so, the participant saves the current local model parameters; otherwise, return to S300.
3. The medical image classification method based on federal learning and attention mechanism according to claim 1, wherein the ResNet-50 neural network model incorporating the channel attention mechanism comprises STAGE0, STAGE1, STAGE2, STAGE3 and STAGE4 connected in sequence, wherein STAGE1, STAGE2, STAGE3 and STAGE4 are residual networks;
wherein STAGE1 comprises one BK1 and two BK2, where BK1 denotes a BottleNeck whose numbers of input and output channels differ, and BK2 denotes a BottleNeck whose numbers of input and output channels are the same;
STAGE2 comprises one BK1 and three BK2;
STAGE3 comprises one BK1 and five BK2;
STAGE4 comprises one BK1 and two BK2.
4. A medical image classification method based on federal learning and Attention mechanisms according to claim 3, wherein the calculating process of the Attention in the neural network comprises:
Let the input feature be $F \in \mathbb{R}^{C_{in}\times H\times W}$ and the output feature be $G \in \mathbb{R}^{C_{out}\times H\times W}$, where $C_{in}$ and $C_{out}$ are the input and output channel sizes, and let $f_{ij}\in\mathbb{R}^{C_{in}}$ and $g_{ij}\in\mathbb{R}^{C_{out}}$ denote the feature vectors of $F$ and $G$ at pixel $(i,j)$; the attention module outputs:

$$g_{ij} = \big\Vert_{l=1}^{N}\Big(\sum_{a,b\in\mathcal{N}_k(i,j)} A\big(q_{ij}^{(l)},k_{ab}^{(l)}\big)\,v_{ab}^{(l)}\Big)$$

where $\Vert$ denotes the concatenation of the outputs of the $N$ heads of the standard self-attention module, $q_{ij}^{(l)}$, $k_{ab}^{(l)}$ and $v_{ab}^{(l)}$ are the query, key and value projections of the pixel features, $\mathcal{N}_k(i,j)$ is the local region of pixels of spatial extent $k$ centered on $(i,j)$, and $A\big(q_{ij}^{(l)},k_{ab}^{(l)}\big)$ are the attention weights of the features within $\mathcal{N}_k(i,j)$.
5. The medical image classification method based on federal learning and Attention mechanism of claim 4, wherein the Attention calculation process in the neural network further comprises calculating the Attention weight according to the following method:
$$A\big(q_{ij}^{(l)},k_{ab}^{(l)}\big)=\operatorname{softmax}_{\mathcal{N}_k(i,j)}\Big(\frac{q_{ij}^{(l)\top}k_{ab}^{(l)}}{\sqrt{d}}\Big)$$

where $d$ is the feature dimension of a single head.
CN202310503180.4A 2023-05-06 2023-05-06 Medical image classification method based on federal learning and attention mechanism Pending CN116543210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310503180.4A CN116543210A (en) 2023-05-06 2023-05-06 Medical image classification method based on federal learning and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310503180.4A CN116543210A (en) 2023-05-06 2023-05-06 Medical image classification method based on federal learning and attention mechanism

Publications (1)

Publication Number Publication Date
CN116543210A true CN116543210A (en) 2023-08-04

Family

ID=87449972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310503180.4A Pending CN116543210A (en) 2023-05-06 2023-05-06 Medical image classification method based on federal learning and attention mechanism

Country Status (1)

Country Link
CN (1) CN116543210A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036830A (en) * 2023-10-07 2023-11-10 之江实验室 Tumor classification model training method and device, storage medium and electronic equipment
CN117036830B (en) * 2023-10-07 2024-01-09 之江实验室 Tumor classification model training method and device, storage medium and electronic equipment
CN117350373A (en) * 2023-11-30 2024-01-05 艾迪恩(山东)科技有限公司 Personalized federal aggregation algorithm based on local self-attention mechanism
CN117350373B (en) * 2023-11-30 2024-03-01 艾迪恩(山东)科技有限公司 Personalized federal aggregation algorithm based on local self-attention mechanism

Similar Documents

Publication Publication Date Title
CN116543210A (en) Medical image classification method based on federal learning and attention mechanism
Yu et al. Deep-learning-empowered breast cancer auxiliary diagnosis for 5GB remote E-health
Malach et al. Proving the lottery ticket hypothesis: Pruning is all you need
Chen et al. Federated learning based mobile edge computing for augmented reality applications
CN110458765B (en) Image quality enhancement method based on perception preserving convolution network
CN108171663B (en) Image filling system of convolutional neural network based on feature map nearest neighbor replacement
WO2021051987A1 (en) Method and apparatus for training neural network model
Li et al. Fedtp: Federated learning by transformer personalization
CN113554654B (en) Point cloud feature extraction system and classification segmentation method based on graph neural network
CN113850272A (en) Local differential privacy-based federal learning image classification method
Jiang et al. Federated learning algorithm based on knowledge distillation
CN112861659B (en) Image model training method and device, electronic equipment and storage medium
CN111915545A (en) Self-supervision learning fusion method of multiband images
CN112085745A (en) Retinal vessel image segmentation method of multi-channel U-shaped full convolution neural network based on balanced sampling splicing
CN116664930A (en) Personalized federal learning image classification method and system based on self-supervision contrast learning
CN117236421B (en) Large model training method based on federal knowledge distillation
CN113327191B (en) Face image synthesis method and device
KR102444449B1 (en) Distributed parallel deep learning system, server and method
CN115186831A (en) Deep learning method with efficient privacy protection
CN108765540A (en) A kind of heavy illumination method based on image and integrated study
CN116051849A (en) Brain network data feature extraction method and device
CN114997374A (en) Rapid and efficient federal learning method for data inclination
Mao et al. A novel user membership leakage attack in collaborative deep learning
CN117350373B (en) Personalized federal aggregation algorithm based on local self-attention mechanism
CN117575044A (en) Data forgetting learning method, device, data processing system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination