CN114897837A - Power inspection image defect detection method based on federal learning and self-adaptive difference - Google Patents

Info

Publication number: CN114897837A
Application number: CN202210530713.3A
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Prior art keywords: client, model, model parameters, local, central server
Other languages: Chinese (zh)
Inventors: 李刚, 张运涛, 孟坤, 张曦月, 贺帅
Current assignee: North China Electric Power University (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original assignee: North China Electric Power University
Application filed by North China Electric Power University; priority to CN202210530713.3A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60: Protecting data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06: Energy or water supply
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00: Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a power inspection image defect detection method based on federated learning and adaptive differential privacy, comprising the following steps: constructing the federated learning scenario, preprocessing the data, issuing global model parameters from the central server, training model parameters at each client, uploading the local model parameters, weighted aggregation by the central server of the collected client model parameters, computing the server's total model loss, and updating the client model parameters. Under the premise of privacy protection and secure data sharing, the method enables all participants to train collaboratively while preserving high accuracy in power inspection image defect detection. Differential privacy is introduced on top of federated learning: the data storage and model training stages of deep learning are moved to the local side, local users exchange only model parameters with the central server, and each client adaptively controls where noise is added according to its data characteristics.

Description

Power inspection image defect detection method based on federal learning and self-adaptive difference
Technical Field
The invention relates to a power inspection image defect detection method, in particular to one based on federated learning and adaptive differential privacy, and belongs to the technical field of power detection.
Background
With the rapid development of smart grid construction and artificial intelligence technology in China, defect detection in power inspection images has become one of the key tasks for safeguarding grid security. The goal of power inspection image defect detection is to accurately identify the category and location information of the defects contained in each image. Mainstream power inspection image defect detection algorithms fall into the following 2 types: traditional defect detection based on machine learning, and defect detection based on deep learning. The former is quite inefficient at judging and analysing the state of power inspection images, and its parameter ranges and feature selection depend mainly on domain-expert experience. The latter, deep-learning-based target detection, achieves good performance only if users upload data to a data centre for centralised training; however, inspection image data may contain national road infrastructure and users' personal privacy information, which causes a serious data island phenomenon and hinders the further development of deep learning in the field of power inspection. With increasingly strict privacy protection and data security controls, and an increasingly pronounced data island phenomenon, the traditional approach of centralised training on large numbers of data samples can no longer fully satisfy the State Grid's regulations on privacy protection and data security. At present, power inspection image defect detection faces many challenges in application, which can be summarised in the following 3 aspects:
(1) Training cost. Power inspection image defect detection algorithms usually adopt centralised training: all labelled data samples are collected and a high-performance server is sought for long training runs. This mode places high demands on the training server's performance and storage, and the time cost of training is very high, especially when large-scale image data are involved.
(2) Data islands. The data island problem refers to power inspection image data being stored in a scattered fashion across individual power companies; independently stored and maintained by different companies, the data form physical islands. Because power data usually carries confidentiality requirements, data held by different departments are hard to share and remain mutually isolated, which greatly limits the usable scale and value of the data.
(3) Privacy protection. During unmanned aerial vehicle inspection, the image data captured by the camera contains a large amount of private information, such as national infrastructure road information and users' houses, so target detection risks leaking security- and privacy-sensitive information. With the promulgation of China's data security laws and growing personal awareness of privacy protection, existing target detection methods cannot fully meet the requirements of privacy protection and secure data sharing.
Disclosure of Invention
The invention aims to provide a power inspection image defect detection method based on federal learning and self-adaptive difference.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a power inspection image defect detection method based on federal learning and self-adaptive difference comprises the following steps:
Step 1: construct the federated learning scenario: the system comprises a central server and K clients, where K > 1; the central server is bidirectionally connected to each client; each client receives the model issued by the central server; the model comprises an input layer, an output layer and one or more intermediate layers; the central server sets the initial model parameters, the iteration count t = 0 and a preset detection accuracy, and configures the initial model parameters as the current global model parameters;
Step 2: data preprocessing: the data comprise a normal-state image set and one or more fault-state image sets; each image in each set is annotated with a category label and bounding-box coordinates (upper-left abscissa, upper-left ordinate, lower-right abscissa, lower-right ordinate);
Step 3: the central server issues training data: it divides the image data into K groups in a non-independent and identically distributed (non-IID) manner and issues one group to each of the K clients as that client's local training data;
Step 4: the central server issues model parameters: it transmits the current global model parameters and the iteration count to each client as that client's initial local model parameters W_{i,L}^t (i = 1, ..., K), where K is the number of clients;
Step 5: client training of model parameters: the training count is incremented by 1; each client trains the model on its local data to obtain local model parameters, finishes local training when the local model reaches a preset convergence or termination condition, and updates the local model parameters as

W_{i,L}^{t+1} = W_{i,L}^{t} - η_i^t · g_i^t,

where η_i^t is the learning rate of the t-th iteration of client i and g_i^t is the model gradient of the t-th iteration of client i;
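The local update in step 5 is plain gradient descent on the client's data. The following sketch is illustrative only (the function names, learning rate, tolerance and toy objective are assumptions, not part of the invention):

```python
import math

def local_training(w, grad_fn, lr, max_epochs=100, tol=1e-6):
    """Step 5 sketch: repeat W <- W - lr * grad on local data until a preset
    convergence condition (parameter change below tol) or the termination
    condition (max_epochs) is reached."""
    for _ in range(max_epochs):
        g = grad_fn(w)  # model gradient of the current iteration
        w_new = [wi - lr * gi for wi, gi in zip(w, g)]
        delta = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_new, w)))
        w = w_new
        if delta < tol:  # preset convergence condition met
            break
    return w

# Toy objective F(w) = 0.5 * ||w||^2, whose gradient is w itself.
w_final = local_training([1.0, -2.0], lambda w: list(w), lr=0.1)
```

In the actual method, `grad_fn` would be the gradient of the client's local detection loss F_i on its inspection images.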
Step 6: upload local model parameters: each client adds Gaussian noise satisfying differential privacy to its local model parameters and uploads the noised local model parameters to the central server;
Step 7: global model parameter aggregation: the central server performs a weighted aggregation based on each client's contribution weight to the global model, obtaining updated global model parameters, as follows:
Step 7-1: compute the contribution weight of each client's model parameters to the global model aggregation update;
Step 7-2: after receiving the model parameters uploaded by the clients, the server computes the contribution weights per step 7-1 and aggregates the model parameters with these weights to obtain updated global model parameters;
Step 8: compute the server's total model loss: the central server issues the aggregated global model parameters to each client; each client computes its local loss under the aggregated global model parameters and uploads it to the central server; the central server then computes the total model loss as

f(Z_t) = Σ_{i=1}^{k} (n_i / n) · H_{i,t} · F_i(Z_{i,t}),

where n_i is the number of data samples of client i, n = Σ_i n_i is the total number of client data samples, F_i(Z_{i,t}) is the local loss of client i's t-th-round iteration model Z_{i,t}, H_{i,t} is the contribution weight to the global model aggregation of client i's model parameters after local differential privacy perturbation at the t-th iteration, and f(Z_t) is the total loss of the aggregated model at the t-th iteration of the central server;
It is then judged whether the server's total model loss has reached the preset detection accuracy or the training count has exceeded the preset maximum; if so, go to step 9; otherwise, go to step 4;
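As a concrete illustration, the sample-weighted, contribution-weighted total loss of step 8 can be computed as below (Python; variable names are illustrative):

```python
def server_total_loss(sample_counts, contribution_weights, local_losses):
    """Step 8 sketch: total loss = sum_i (n_i / n) * H_i * F_i, where n_i is
    client i's sample count, n the total sample count, H_i the contribution
    weight and F_i the local loss reported by client i."""
    n = sum(sample_counts)
    return sum((n_i / n) * h_i * f_i
               for n_i, h_i, f_i in zip(sample_counts, contribution_weights,
                                        local_losses))

# Three clients with different sample counts and equal contribution weights.
loss = server_total_loss([100, 200, 100], [1 / 3, 1 / 3, 1 / 3], [0.9, 0.6, 1.2])
```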
and step 9: updating client model parameters: the central server sends the optimal model parameters to each client, and each client updates the local model parameters with the optimal model parameters.
Further, step 6 comprises the following specific steps:
Step 6-1: each client uses its local model to extract features from a group of input images, obtaining the corresponding feature matrices;
Step 6-2: the obtained feature matrices are grouped by the red, green and blue channels, and a ReLU activation is applied to each channel's feature matrix, yielding the red, green and blue feature matrices;
Step 6-3: Gaussian noise is added to the elements greater than 0 of the Hadamard-product matrix W_M of the red, green and blue feature matrices:

W'_M = W_M + N',

where W'_M is the feature matrix after Gaussian noise is added and N' is the noise matrix;
Step 6-4: client i adds Gaussian noise N to its t-th-iteration local model parameters W_{t,L}:

W'_{t,L} = W_{t,L} + N,

where W'_{t,L} are the local client's model parameters after adding Gaussian noise; the Gaussian noise has mean 0 and covariance (Δf·σ)²·I, where I is the identity matrix, Δf is the maximum variation range (sensitivity) of the model parameters, and σ = √(2 ln(1.25/δ)) / ε (the standard Gaussian-mechanism calibration, with relaxation parameter δ). The size of ε may be chosen by the user; a smaller ε indicates a higher level of privacy protection.
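A minimal sketch of the Gaussian perturbation in steps 6-3 and 6-4 follows (Python; the sigma calibration shown is the standard Gaussian-mechanism formula assumed here, and the delta value and seeding are illustrative assumptions):

```python
import math
import random

def gaussian_mechanism(params, sensitivity, epsilon, delta=1e-5, seed=0):
    """Step 6-4 sketch: add N(0, (sensitivity * sigma)^2) noise to every
    parameter, with sigma = sqrt(2 * ln(1.25 / delta)) / epsilon; a smaller
    epsilon yields larger noise and hence stronger privacy."""
    rng = random.Random(seed)
    sigma = math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [p + rng.gauss(0.0, sensitivity * sigma) for p in params]

def noise_positive_entries(matrix, sensitivity, epsilon, delta=1e-5, seed=0):
    """Step 6-3 sketch: perturb only entries of the Hadamard-product feature
    matrix W_M that are greater than 0 (the ReLU-selected positions)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return [[v + rng.gauss(0.0, sensitivity * sigma) if v > 0 else v
             for v in row] for row in matrix]

noisy = gaussian_mechanism([0.5, -0.2, 1.0], sensitivity=0.1, epsilon=1.0)
masked = noise_positive_entries([[1.0, -1.0]], sensitivity=0.1, epsilon=1.0)
```

Note how the non-positive entry is left untouched, which is what makes the noise placement adaptive to the data.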
Compared with the prior art, the invention at least has the following beneficial effects:
(1) Under the premise of privacy protection and secure data sharing, the method enables all participants to train collaboratively and preserves high accuracy in power inspection image defect detection;
(2) differential privacy is introduced on top of federated learning: the data storage and model training stages of deep learning are moved to the local side, local users exchange only model parameters with the central server, and each client adaptively controls where noise is added according to its data characteristics, preventing pixel-level images and label information from being inferred in reverse, so that privacy protection and data security are effectively guaranteed;
(3) a knowledge distillation strategy transfers the knowledge learned by the global model to the local model's learning, accelerating the convergence of the local model, which effectively reduces the number of communication rounds and lowers the communication cost;
(4) the method adaptively selects where noise is added according to the local training data samples, mitigating the negative influence of the noise on model accuracy;
(5) an evaluation mechanism based on the contribution of client model parameters effectively assesses each client's training contribution to the global model parameter aggregation; weighting the aggregation by these contribution weights effectively reduces the influence of malicious devices on the accuracy of the federated aggregation model and makes federated learning more robust.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a federated learning architecture in accordance with an embodiment of the present invention;
fig. 3 is a schematic diagram of uploading local model parameters and adding noise in the embodiment of the present invention.
Detailed Description
Example 1
Step 1: construct the federated learning scenario: the system comprises a central server and K clients, where K > 1; the central server is bidirectionally connected to each client; each client receives the model issued by the central server; the model comprises an input layer, an output layer and one or more intermediate layers; the central server sets the initial model parameters, the iteration count t = 0 and a preset detection accuracy, and configures the initial model parameters as the current global model parameters;
Step 2: data preprocessing: the data comprise a normal-state image set and one or more fault-state image sets; each image in each set is annotated with a category label and bounding-box coordinates (upper-left abscissa, upper-left ordinate, lower-right abscissa, lower-right ordinate);
Step 3: the central server issues training data: it divides the image data into K groups in a non-independent and identically distributed (non-IID) manner and issues one group to each of the K clients as that client's local training data;
Step 4: the central server issues model parameters: it transmits the current global model parameters and the iteration count to each client as that client's initial local model parameters W_{i,L}^t (i = 1, ..., K), where K is the number of clients;
Step 5: client training of model parameters: the training count is incremented by 1; each client trains the model on its local data to obtain local model parameters, finishes local training when the local model reaches a preset convergence or termination condition, and updates the local model parameters as

W_{i,L}^{t+1} = W_{i,L}^{t} - η_i^t · g_i^t,

where η_i^t is the learning rate of the t-th iteration of client i and g_i^t is the model gradient of the t-th iteration of client i;
Step 6: upload local model parameters: each client adds Gaussian noise satisfying differential privacy to its local model parameters and uploads the noised local model parameters to the central server;
Step 7: global model parameter aggregation: the central server performs a weighted aggregation based on each client's contribution weight to the global model, obtaining updated global model parameters, as follows:
Step 7-1: compute the contribution weight of each client's model parameters to the global model aggregation update;
Step 7-2: after receiving the model parameters uploaded by the clients, the server computes the contribution weights per step 7-1 and aggregates the model parameters with these weights to obtain updated global model parameters;
Step 8: compute the server's total model loss: the central server issues the aggregated global model parameters to each client; each client computes its local loss under the aggregated global model parameters and uploads it to the central server; the central server then computes the total model loss as

f(Z_t) = Σ_{i=1}^{k} (n_i / n) · H_{i,t} · F_i(Z_{i,t}),

where n_i is the number of data samples of client i, n = Σ_i n_i is the total number of client data samples, F_i(Z_{i,t}) is the local loss of client i's t-th-round iteration model Z_{i,t}, H_{i,t} is the contribution weight to the global model aggregation of client i's model parameters after local differential privacy perturbation at the t-th iteration, and f(Z_t) is the total loss of the aggregated model at the t-th iteration of the central server;
It is then judged whether the server's total model loss has reached the preset detection accuracy or the training count has exceeded the preset maximum; if so, go to step 9; otherwise, go to step 4;
and step 9: updating client model parameters: the central server sends the optimal model parameters to each client, and each client updates the local model parameters with the optimal model parameters.
Further, step 6 includes the following specific steps, as shown in fig. 3:
Step 6-1: each client uses its local model to extract features from a group of input images, obtaining the corresponding feature matrices;
Step 6-2: the obtained feature matrices are grouped by the red, green and blue channels, and a ReLU activation is applied to each channel's feature matrix, yielding the red, green and blue feature matrices;
Step 6-3: Gaussian noise is added to the elements greater than 0 of the Hadamard-product matrix W_M of the red, green and blue feature matrices:

W'_M = W_M + N',

where W'_M is the feature matrix after Gaussian noise is added and N' is the noise matrix;
Step 6-4: client i adds Gaussian noise N to its t-th-iteration local model parameters W_{t,L}:

W'_{t,L} = W_{t,L} + N,

where W'_{t,L} are the local client's model parameters after adding Gaussian noise; the Gaussian noise has mean 0 and covariance (Δf·σ)²·I, where I is the identity matrix, Δf is the maximum variation range (sensitivity) of the model parameters, and σ = √(2 ln(1.25/δ)) / ε (the standard Gaussian-mechanism calibration, with relaxation parameter δ). The size of ε may be chosen by the user; a smaller ε represents a higher privacy protection level; in this embodiment ε = 1.
Further, step 7 comprises the following specific steps:
Step 7-1: compute the contribution weight of each client's model parameters to the global model aggregation update, per equations (4)-(9).

A_{m×k}(t) = [a_{ji}(t)]  (4)

where element a_{ji}(t) represents the change of the j-th dimension of the model parameters of client i in the t-th iteration (1 ≤ j ≤ m, 1 ≤ i ≤ k), m is the dimension of the model parameters, and k is the number of clients participating in the federated training in the t-th iteration;

B_{m×k}(t) = [b_{ji}(t)]  (5)

where element b_{ji}(t) represents the mean of the changes of the j-th dimension of the model parameters across the k clients in the t-th iteration (1 ≤ j ≤ m, 1 ≤ i ≤ k), with m and k as above.

D_{m×k}(t) = [d_{ji}(t)] = A_{m×k}(t) - B_{m×k}(t)  (6)

Equations (6)-(7) are intermediate steps used only to derive the quantity e_{iz}(t) of equation (8) and have no other special meaning.

The initial contribution weights of the models uploaded by each client's local training to the global model aggregation are assumed equal, all being 1/k, where k is the number of clients participating in the federated training in round t.

In equation (8), H_{i,t} represents the contribution weight to the global model aggregation of the model uploaded by client i in the t-th iteration, and SUM(e_{iz}(t) < 0) indicates whether the model parameters uploaded by clients i and z in the t-th iteration mutually constrain their contributions to the global aggregation, where k is the number of clients participating in the federated training in the t-th iteration (1 ≤ i, z ≤ k, i ≠ z); the corresponding normalisation is given by equation (9).

Step 7-2: after receiving the model parameters uploaded by the clients, the server computes the contribution weights, aggregates the model parameters with these weights to obtain the new global model parameters, and updates them per equation (10):

W'_{t,G} = Σ_{i=1}^{k} H_{i,t} · W'_{i,t,L}  (10)

where W'_{t,G} are the central server's model parameters after the t-th-iteration update, k is the number of clients participating in the federated training, W'_{i,t,L} are client i's local model parameters after local differential privacy perturbation at the t-th iteration, and H_{i,t} is the contribution weight to the global model aggregation of client i's perturbed t-th-iteration model parameters.
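The weighted update of formula (10) is an element-wise weighted sum of the clients' (noised) parameter vectors; a minimal sketch follows (Python; it assumes the contribution weights have already been normalised to sum to 1):

```python
def aggregate_global(client_params, contribution_weights):
    """Formula (10) sketch: W_global[j] = sum_i H_i * W_i[j], i.e. each
    coordinate of the new global parameters is the contribution-weighted
    sum of the corresponding client coordinates."""
    dim = len(client_params[0])
    return [sum(h * w[j] for h, w in zip(contribution_weights, client_params))
            for j in range(dim)]

# Two clients; the first is weighted more heavily (0.75 vs 0.25).
w_global = aggregate_global([[1.0, 2.0], [3.0, -2.0]], [0.75, 0.25])
```

Down-weighting a client in this sum is exactly how the contribution-evaluation mechanism limits the influence of a malicious device on the aggregated model.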
Image sample annotation performs a series of operations on the collected images and then generates a txt/xml file with the same name as each image, recording the target information: category label, upper-left abscissa, upper-left ordinate, lower-right abscissa and lower-right ordinate. The high-resolution image data set captured by the unmanned aerial vehicle during transmission-line inspection contains 4 categories with 1742 sample labels in total: 1286 normal bolt samples (label ls-zc), 59 bolt-nut-missing samples (label ls-qlm), 174 bolt-pin-missing samples (label ls-qxd) and 223 bolt-washer-missing samples (label ls-qdp). The data set is divided by distributing the image data to the K clients in a non-independent and identically distributed (non-IID) manner.
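A common way to realise the non-IID division described above is label-skewed sharding: sort samples by label, cut them into shards, and deal each client only a few shards. The sketch below is illustrative (the shard counts and seed are assumptions, not specified in the patent):

```python
import random

def non_iid_partition(labels, num_clients, shards_per_client=2, seed=0):
    """Sort sample indices by label, split them into equal shards, and deal
    shards to clients so each client sees only a few label regions."""
    idx = sorted(range(len(labels)), key=lambda i: labels[i])
    num_shards = num_clients * shards_per_client
    shard_size = len(idx) // num_shards
    shards = [idx[s * shard_size:(s + 1) * shard_size]
              for s in range(num_shards)]
    random.Random(seed).shuffle(shards)
    return [sum(shards[c * shards_per_client:(c + 1) * shards_per_client], [])
            for c in range(num_clients)]

# 4 label classes with 25 samples each, split across 5 clients.
labels = [i // 25 for i in range(100)]
parts = non_iid_partition(labels, num_clients=5)
```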
This embodiment adopts a pre-trained bolt defect detection model as the initial global model of the central server and transmits it to all clients participating in training. The initial global model is obtained by training on images annotated at the central server until the preset number of model iterations or the target model accuracy is reached.
Each client trains with its own local power inspection image data, finishes local training when the model converges or a set termination condition is reached, and updates its local model. A knowledge distillation strategy is adopted during local training to accelerate the convergence of the local model, reduce the number of communication rounds, and effectively lower the communication cost. The training process of the knowledge distillation strategy comprises the following 2 steps: first, an intermediate layer of the central server model is selected as the reference, and only the intermediate layer of the local model is supervised, prompting the local model to quickly learn the central server model's intermediate feature representation; second, all layers of the local model are trained.
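The two-stage distillation above can be expressed as a feature-matching term plus the task loss. The snippet below (Python; the function names and balancing factor lam are illustrative assumptions) shows the intermediate-layer supervision of stage 1 and the combined objective:

```python
def feature_matching_loss(student_feats, teacher_feats):
    """Stage 1 of the distillation strategy: mean-squared error between the
    local (student) model's intermediate features and the global (teacher)
    model's intermediate features, pushing the local model to mimic the
    central server model's intermediate representation."""
    n = len(student_feats)
    return sum((s - t) ** 2 for s, t in zip(student_feats, teacher_feats)) / n

def distillation_loss(task_loss, student_feats, teacher_feats, lam=0.5):
    """Combined objective sketch: task loss plus lam times the
    feature-matching term (lam is an assumed balancing hyper-parameter)."""
    return task_loss + lam * feature_matching_loss(student_feats, teacher_feats)

l = distillation_loss(1.0, [0.0, 2.0], [1.0, 0.0], lam=0.5)
```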
The client adds noise to the trained model parameters through an adaptive local differential privacy strategy and uploads the noised model parameters to the server.

Claims (2)

1. A power inspection image defect detection method based on federal learning and self-adaptive difference is characterized by comprising the following steps:
Step 1: construct the federated learning scenario: the system comprises a central server and K clients, where K > 1; the central server is bidirectionally connected to each client; each client receives the model issued by the central server; the model comprises an input layer, an output layer and one or more intermediate layers; the central server sets the initial model parameters, the iteration count t = 0 and a preset detection accuracy, and configures the initial model parameters as the current global model parameters;
Step 2: data preprocessing: the data comprise a normal-state image set and one or more fault-state image sets; each image in each set is annotated with a category label and bounding-box coordinates (upper-left abscissa, upper-left ordinate, lower-right abscissa, lower-right ordinate);
Step 3: the central server issues training data: it divides the image data into K groups in a non-independent and identically distributed (non-IID) manner and issues one group to each of the K clients as that client's local training data;
Step 4: the central server issues model parameters: it transmits the current global model parameters and the iteration count to each client as that client's initial local model parameters W_{i,L}^t (i = 1, ..., K), where K is the number of clients;
Step 5: client training of model parameters: the training count is incremented by 1; each client trains the model on its local data to obtain local model parameters, finishes local training when the local model reaches a preset convergence or termination condition, and updates the local model parameters as

W_{i,L}^{t+1} = W_{i,L}^{t} - η_i^t · g_i^t,

where η_i^t is the learning rate of the t-th iteration of client i and g_i^t is the model gradient of the t-th iteration of client i;
Step 6: upload local model parameters: each client adds Gaussian noise satisfying differential privacy to its local model parameters and uploads the noised local model parameters to the central server;
Step 7: global model parameter aggregation: the central server performs a weighted aggregation based on each client's contribution weight to the global model, obtaining updated global model parameters, as follows:
Step 7-1: compute the contribution weight of each client's model parameters to the global model aggregation update;
Step 7-2: after receiving the model parameters uploaded by the clients, the server computes the contribution weights per step 7-1 and aggregates the model parameters with these weights to obtain updated global model parameters;
Step 8: compute the server's total model loss: the central server issues the aggregated global model parameters to each client; each client computes its local loss under the aggregated global model parameters and uploads it to the central server; the central server then computes the total model loss as

f(Z_t) = Σ_{i=1}^{k} (n_i / n) · H_{i,t} · F_i(Z_{i,t}),

where n_i is the number of data samples of client i, n = Σ_i n_i is the total number of client data samples, F_i(Z_{i,t}) is the local loss of client i's t-th-round iteration model Z_{i,t}, H_{i,t} is the contribution weight to the global model aggregation of client i's model parameters after local differential privacy perturbation at the t-th iteration, and f(Z_t) is the total loss of the aggregated model at the t-th iteration of the central server;
It is then judged whether the server's total model loss has reached the preset detection accuracy or the training count has exceeded the preset maximum; if so, go to step 9; otherwise, go to step 4;
step 9: updating client model parameters: the central server sends the optimal model parameters to each client, and each client updates its local model parameters with the optimal model parameters.
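The weighted aggregation of step 7 and the total-loss computation of step 8 can be sketched as follows. This is an illustrative Python sketch only, not the patent's implementation: the function names, the normalization of the combined weights, and the treatment of the contribution weights H_i as precomputed per-client scalars are assumptions.

```python
import numpy as np

def aggregate_global_model(client_params, sample_counts, contribution_weights):
    """Steps 7-1/7-2 (sketch): weighted aggregation of (noised) client parameters.

    client_params: list of 1-D parameter vectors uploaded by the clients
    sample_counts: n_i, number of data samples at client i
    contribution_weights: H_i, assumed precomputed contribution weights
    """
    n = float(sum(sample_counts))
    w = np.array([h * n_i / n for h, n_i in zip(contribution_weights, sample_counts)])
    w = w / w.sum()  # normalize so the aggregation weights sum to 1 (assumption)
    return sum(wi * p for wi, p in zip(w, client_params))

def server_total_loss(local_losses, sample_counts, contribution_weights):
    """Step 8 (sketch): total loss f(Z_t) as the H_i- and n_i/n-weighted
    sum of the client local losses F_i(Z_{i,t})."""
    n = float(sum(sample_counts))
    return sum(h * (n_i / n) * loss
               for h, n_i, loss in zip(contribution_weights, sample_counts, local_losses))
```

With two equally weighted clients this reduces to plain FedAvg: the global parameters are the sample-count-weighted mean of the client parameters.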
2. The power inspection image defect detection method based on federal learning and self-adaptive difference as claimed in claim 1, wherein step 6 comprises the following specific steps:
step 6-1: each client uses its local model to perform feature extraction on a group of input images to obtain the corresponding feature matrix;
step 6-2: grouping the obtained feature matrices according to the 3 channels red, green and blue, and applying the ReLU activation function to the feature matrix of each channel to obtain the red, green and blue feature matrices;
step 6-3: adding Gaussian noise to the elements greater than 0 of the Hadamard product result matrix W_M of the red, green and blue feature matrices:

W′_M = W_M + N′

where W′_M is the feature matrix with Gaussian noise added and N′ is the noise matrix;
step 6-4: the client adds Gaussian noise N to its t-th-iteration local model parameters W_{t,L}:

W′_{t,L} = W_{t,L} + N

where W′_{t,L} are the client's local model parameters with Gaussian noise added; the Gaussian noise has mean 0 and covariance (Δf·σ)²·I, where I is the identity matrix, Δf is the maximum variation range (sensitivity) of the model parameters, and

σ = sqrt(2·ln(1.25/δ)) / ε

where δ is the relaxation probability of the (ε, δ)-differential privacy guarantee, the size of ε can be determined at the user's discretion, and a smaller ε indicates a higher level of privacy protection.
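The Gaussian-mechanism perturbation of steps 6-3 and 6-4 can be sketched as follows. This is an illustrative sketch under stated assumptions: the claim fixes only the zero mean, the covariance (Δf·σ)²·I, and that noise is added to the positive elements of W_M; the function names and the δ value used here are assumptions.

```python
import numpy as np

def gaussian_mechanism_sigma(epsilon, delta=1e-5):
    # sigma = sqrt(2 ln(1.25/delta)) / epsilon — the standard (eps, delta)-DP
    # Gaussian mechanism; delta=1e-5 is an illustrative default, not from the claim
    return np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def add_gaussian_noise(params, sensitivity, epsilon, delta=1e-5, rng=None):
    """Step 6-4 (sketch): perturb local model parameters with Gaussian noise
    of mean 0 and standard deviation delta_f * sigma per coordinate."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = gaussian_mechanism_sigma(epsilon, delta)
    noise = rng.normal(0.0, sensitivity * sigma, size=np.shape(params))
    return np.asarray(params, dtype=float) + noise

def mask_positive_and_noise(w_m, sensitivity, epsilon, delta=1e-5, rng=None):
    """Step 6-3 (sketch): add Gaussian noise only to the elements of the
    Hadamard-product feature matrix W_M that are greater than 0."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = gaussian_mechanism_sigma(epsilon, delta)
    w_m = np.asarray(w_m, dtype=float)
    noise = rng.normal(0.0, sensitivity * sigma, size=w_m.shape)
    return np.where(w_m > 0, w_m + noise, w_m)
```

Note the privacy/utility trade-off the claim describes: a smaller ε yields a larger σ, i.e. heavier noise and stronger privacy protection.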
CN202210530713.3A 2022-05-16 2022-05-16 Power inspection image defect detection method based on federal learning and self-adaptive difference Pending CN114897837A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210530713.3A CN114897837A (en) 2022-05-16 2022-05-16 Power inspection image defect detection method based on federal learning and self-adaptive difference


Publications (1)

Publication Number Publication Date
CN114897837A true CN114897837A (en) 2022-08-12

Family

ID=82723613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210530713.3A Pending CN114897837A (en) 2022-05-16 2022-05-16 Power inspection image defect detection method based on federal learning and self-adaptive difference

Country Status (1)

Country Link
CN (1) CN114897837A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113962988A (en) * 2021-12-08 2022-01-21 东南大学 Power inspection image anomaly detection method and system based on federal learning
CN113962988B (en) * 2021-12-08 2024-04-09 东南大学 Power inspection image anomaly detection method and system based on federal learning
CN115410103A (en) * 2022-09-01 2022-11-29 河海大学 Dam defect identification model rapid convergence method based on federal learning
CN115410103B (en) * 2022-09-01 2023-04-25 河海大学 Dam defect identification model rapid convergence method based on federal learning
CN115510472A (en) * 2022-11-23 2022-12-23 南京邮电大学 Cloud edge aggregation system-oriented multiple differential privacy protection method and system
CN115510472B (en) * 2022-11-23 2023-04-07 南京邮电大学 Multi-difference privacy protection method and system for cloud edge aggregation system
CN115761378A (en) * 2022-12-07 2023-03-07 东南大学 Power inspection image classification and detection method and system based on federal learning
CN115761378B (en) * 2022-12-07 2023-08-01 东南大学 Power inspection image classification and detection method and system based on federal learning
CN116029367A (en) * 2022-12-26 2023-04-28 东北林业大学 Fault diagnosis model optimization method based on personalized federal learning
CN116127417A (en) * 2023-04-04 2023-05-16 山东浪潮科学研究院有限公司 Code defect detection model construction method, device, equipment and storage medium
CN116503420A (en) * 2023-04-26 2023-07-28 佛山科学技术学院 Image segmentation method based on federal learning and related equipment
CN116502950A (en) * 2023-04-26 2023-07-28 佛山科学技术学院 Defect detection method based on federal learning and related equipment
CN116503420B (en) * 2023-04-26 2024-05-14 佛山科学技术学院 Image segmentation method based on federal learning and related equipment
CN116186629A (en) * 2023-04-27 2023-05-30 浙江大学 Financial customer classification and prediction method and device based on personalized federal learning
CN116596865B (en) * 2023-05-05 2024-04-16 深圳市大数据研究院 Defect detection method, defect detection system and robot
CN116596865A (en) * 2023-05-05 2023-08-15 深圳市大数据研究院 Defect detection method, defect detection system and robot

Similar Documents

Publication Publication Date Title
CN114897837A (en) Power inspection image defect detection method based on federal learning and self-adaptive difference
CN113962988B (en) Power inspection image anomaly detection method and system based on federal learning
CN112770291B (en) Distributed intrusion detection method and system based on federal learning and trust evaluation
CN109617888B (en) Abnormal flow detection method and system based on neural network
CN107105320B (en) A kind of Online Video temperature prediction technique and system based on user emotion
CN108520155B (en) Vehicle behavior simulation method based on neural network
CN114912705A (en) Optimization method for heterogeneous model fusion in federated learning
CN105488528A (en) Improved adaptive genetic algorithm based neural network image classification method
CN116187469A (en) Client member reasoning attack method based on federal distillation learning framework
CN113746663B (en) Performance degradation fault root cause positioning method combining mechanism data and dual drives
CN109359815A (en) Based on the smart grid deep learning training sample generation method for generating confrontation network
CN114777192B (en) Secondary network heat supply autonomous optimization regulation and control method based on data association and deep learning
CN112087442A (en) Time sequence related network intrusion detection method based on attention mechanism
CN114553661A (en) Mobile user equipment clustering training method for wireless federal learning
CN113901448A (en) Intrusion detection method based on convolutional neural network and lightweight gradient elevator
CN114970886A (en) Clustering-based adaptive robust collaborative learning method and device
CN116306911A (en) Distributed machine learning-based thermodynamic station load prediction and optimization control method
CN105844334A (en) Radial basis function neural network-based temperature interpolation algorithm
CN110766201A (en) Revenue prediction method, system, electronic device, computer-readable storage medium
CN117371555A (en) Federal learning model training method based on domain generalization technology and unsupervised clustering algorithm
CN112270397A (en) Color space conversion method based on deep neural network
CN116933860A (en) Transient stability evaluation model updating method and device, electronic equipment and storage medium
Wan et al. Capturing Spatial-Temporal Correlations with Attention Based Graph Convolutional Networks for Network Traffic Prediction
Li et al. Topology-Aware-based Traffic Prediction Mechanism for Elastic Cognitive Optical Networks
Liu Simulation Training Auxiliary Model Based on Neural Network and Virtual Reality Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination