CN113989627A - Urban prevention and control image detection method and system based on asynchronous federated learning - Google Patents

Urban prevention and control image detection method and system based on asynchronous federated learning

Info

Publication number
CN113989627A
Authority
CN
China
Prior art keywords
training
model
local
asynchronous
global model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111633287.8A
Other languages
Chinese (zh)
Other versions
CN113989627B (en)
Inventor
袁戟
常可欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wanwuyun Technology Co ltd
Original Assignee
Shenzhen Wanwuyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wanwuyun Technology Co ltd filed Critical Shenzhen Wanwuyun Technology Co ltd
Priority to CN202111633287.8A priority Critical patent/CN113989627B/en
Publication of CN113989627A publication Critical patent/CN113989627A/en
Application granted granted Critical
Publication of CN113989627B publication Critical patent/CN113989627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/008Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols involving homomorphic encryption

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an urban prevention and control image detection method based on asynchronous federated learning, which comprises the following steps: a cloud server initializes a global model; each end device performs scene labeling on its urban prevention and control image data, divides the data into a training set and a test set, and screens out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained; each end device acquires the global model from the cloud server and performs local training on the set to be trained to obtain a local model; each end device homomorphically encrypts the local model and uploads it to the cloud server; and the cloud server performs global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model.

Description

Urban prevention and control image detection method and system based on asynchronous federated learning
Technical Field
The invention relates to the technical field of image recognition, and in particular to an urban prevention and control image detection method and system based on asynchronous federated learning.
Background
At present, image recognition scenarios for urban prevention and control mainly include road surface construction damage, illegally parked vehicles, river surface floating objects, garbage dumping and the like. Since these scenarios are widely distributed across a city, collaboration between different companies is required. For example, company A acquires image data with fixed cameras while company B acquires images with cameras mounted on unmanned aerial vehicles. Because pictures acquired by different companies cannot be shared directly, for reasons of data security and privacy protection, a federated training network needs to be established: data features are extracted locally within each company, the results are pushed to the cloud for global training using homomorphic encryption, and the trained network parameters are transmitted back to company A and company B, where they are decrypted for subsequent model application.
Because different companies update their images and train their models at different frequencies, globally synchronous federated computation requires considerable manpower and material resources for unified management of the edge devices and is inefficient. In addition, images acquired by unmanned aerial vehicles and by fixed cameras differ markedly in viewing angle, which makes training inaccurate.
Disclosure of Invention
The invention aims to provide an urban prevention and control image detection method and system based on asynchronous federated learning, so as to solve the problems in the prior art that a globally synchronous federated computation strategy consumes considerable manpower and material resources for unified management of edge devices and is inefficient, and that differences in image viewing angle and the like make training inaccurate.
In a first aspect, an embodiment of the present invention provides an urban prevention and control image detection method based on asynchronous federated learning, including:
S101, a cloud server initializes a global model;
S102, each end device performs scene labeling on urban prevention and control image data, divides the data into a training set and a test set, and screens out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained;
S103, each end device acquires the global model from the cloud server and performs local training on the set to be trained to obtain a local model;
S104, each end device uploads the local model to the cloud server after homomorphic encryption;
S105, the cloud server performs global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model;
and S106, return to step S102 until the global model on the cloud server reaches the expected performance.
In a second aspect, an embodiment of the present invention provides an urban prevention and control image detection system based on asynchronous federated learning, including a cloud server and end devices;
the cloud server is used for initializing a global model, and for performing global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model;
each end device is used for performing scene labeling on urban prevention and control image data, dividing the data into a training set and a test set, and screening out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained; for acquiring the global model from the cloud server and performing local training on the set to be trained to obtain a local model; and for uploading the local model to the cloud server after homomorphic encryption.
According to the embodiments of the invention, security is well guaranteed by keeping the urban prevention and control image data on each end device for labeling and training. Through the asynchronous federated computing strategy, the cloud server computes weights for the local models trained by the end devices and selects the local models with larger weights for global training, so that the local models participating in training are more closely related and have higher correlation. Compared with a synchronous federated computing strategy, the cloud server does not need to wait until all end devices have trained their local models for the current round before performing global training, so efficiency is higher, and end devices that are not in use can perform other work instead of being occupied all the time.
Meanwhile, each end device screens the training set of its urban prevention and control image data to obtain a set to be trained whose distribution is close to that of the test set, which improves the training accuracy of the model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of an urban prevention and control image detection method based on asynchronous federated learning according to an embodiment of the present invention;
Fig. 2 is a system block diagram of an urban prevention and control image detection system based on asynchronous federated learning according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, an urban prevention and control image detection method based on asynchronous federated learning includes:
S101, a cloud server initializes a global model;
S102, each end device performs scene labeling on urban prevention and control image data, divides the data into a training set and a test set, and screens out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained;
S103, each end device acquires the global model from the cloud server and performs local training on the set to be trained to obtain a local model;
S104, each end device uploads the local model to the cloud server after homomorphic encryption;
S105, the cloud server performs global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model;
and S106, return to step S102 until the global model on the cloud server reaches the expected performance.
In this embodiment, security is well guaranteed by keeping the urban prevention and control image data on each end device for labeling and training. Through the asynchronous federated computing strategy, the cloud server computes weights for the local models trained by the end devices and selects the local models with larger weights for global training, so that the local models participating in training are more closely related and have higher correlation. Compared with a synchronous federated computing strategy, global training can proceed without waiting for all end devices to finish training their local models for the current round, so efficiency is higher, and end devices that are not in use can perform other work instead of being occupied all the time.
Meanwhile, each end device screens the training set of its urban prevention and control image data to obtain a set to be trained whose distribution is close to that of the test set, which improves the training accuracy of the model.
Urban prevention and control image types include road surface construction damage, illegally parked vehicles, river surface floating objects, garbage dumping and the like.
The end devices may be processors distributed across different companies or within the same company; they may be spread over different corners of the same city or concentrated in a single machine room.
Because each end device can obtain a large number of images through its image acquisition terminals, it labels the images after obtaining them, executes step S102, and then cycles through steps S102-S105. The interval between finishing S105 and starting S102 again may be a preset time, or may be triggered when a preset condition is reached; this avoids running the whole procedure every time a single image is obtained, which would waste the end device's computing resources and is unnecessary. Labeling and local training are only performed after enough urban prevention and control image data has been accumulated, and the trigger can be set according to the actual situation.
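For concreteness, a minimal, self-contained sketch of one such training cycle is given below. Everything in it is an assumption introduced for illustration: the class and function names (CloudServer, EndDevice, encrypt, decrypt), the toy linear "local training" step, the identity placeholders standing in for homomorphic encryption, and the exponential staleness weight used in the asynchronous update.

```python
# Illustrative sketch of steps S101-S106; all names and numeric choices are assumptions.
import numpy as np

def encrypt(params):            # placeholder for homomorphic encryption of model parameters
    return params

def decrypt(params):            # placeholder for the matching decryption on the cloud side
    return params

class CloudServer:
    def __init__(self, dim):
        self.global_model = np.zeros(dim)     # S101: initialize the global model
        self.round = 0                        # s, current global update round

    def receive(self, enc_local, trained_round, n_k, n, window=3, decay=0.5, lr=1.0):
        # S105: asynchronous update - aggregate as soon as one local model arrives.
        if self.round - trained_round > window:          # too stale: skip (cf. S201-S203)
            return
        local = decrypt(enc_local)
        beta = np.exp(-decay * (self.round - trained_round))   # assumed staleness weight
        self.global_model += lr * (n_k / n) * beta * (local - self.global_model)
        self.round += 1

class EndDevice:
    def __init__(self, data, labels):
        self.data, self.labels = data, labels  # stands in for labeled, screened image features

    def train_once(self, server, n_total):
        # S102 (labeling and KL-based screening) is omitted here; see the later sketches.
        local = server.global_model.copy()               # S103: fetch the current global model
        trained_round = server.round
        for x, y in zip(self.data, self.labels):         # S103: toy "local training" step
            local -= 0.1 * (local @ x - y) * x
        server.receive(encrypt(local), trained_round,    # S104/S105: encrypted upload
                       n_k=len(self.data), n=n_total)

rng = np.random.default_rng(0)
server = CloudServer(dim=4)
devices = [EndDevice(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
for _ in range(5):                                       # S106: repeat until converged
    for dev in devices:
        dev.train_once(server, n_total=60)
print(server.global_model)
```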
In an embodiment, screening out from the training set a data set whose distribution is close to that of the test set samples, as the set to be trained, includes:
calculating the data distribution P(x) of the test set;
calculating the KL divergence between the feature distribution Q(x) of the training set and the feature distribution P(x) of the test set, in its standard form KL(Q ‖ P) = Σ_x Q(x) · log(Q(x) / P(x));
judging from the KL divergence whether the training set is close to the sample distribution of the test set;
if so, the training set does not need to be sampled;
if not, sampling the training set to obtain the set to be trained;
where x_k denotes the normalized features of the partially sampled data in the k-th end device, Q(x) denotes the data distribution of the training set, and x denotes the normalized data features collected by an end device.
In this embodiment, the KL divergence is used to measure the distance between the two distributions. The test-set distribution P(x) is computed from the test set and can be determined by a probability density function (pdf), for example via the kernel density estimate displayed by kdeplot.
Whether the training set is close to the sample distribution of the test set is judged from the KL divergence mainly by setting a threshold: if the KL divergence is smaller than the threshold, the two sample distributions are judged to be close enough; otherwise, sampling is required to obtain samples that are close enough, which are used as the set to be trained. The threshold needs to be adjusted according to the actual distribution shape and the training results on the test set, and is not a fixed value.
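As a concrete illustration of this screening decision, the sketch below estimates both feature distributions with Gaussian kernel density estimates and approximates the KL divergence on a grid. The one-dimensional features, the KDE estimator and the example threshold of 0.1 are assumptions made here for readability; the patent does not fix these choices.

```python
# Sketch of the KL-divergence screening decision; features, KDE and threshold are assumptions.
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(train_feats, test_feats, grid_size=512, eps=1e-12):
    """Approximate KL(Q || P) between training (Q) and test (P) feature distributions."""
    p = gaussian_kde(test_feats)       # P(x): test-set density (kdeplot-style pdf estimate)
    q = gaussian_kde(train_feats)      # Q(x): training-set density
    xs = np.linspace(min(train_feats.min(), test_feats.min()),
                     max(train_feats.max(), test_feats.max()), grid_size)
    qx = q(xs) + eps
    px = p(xs) + eps
    dx = xs[1] - xs[0]
    return float(np.sum(qx * np.log(qx / px)) * dx)

def needs_resampling(train_feats, test_feats, threshold=0.1):
    # Below the (tuned) threshold the distributions are close enough and the training set
    # is used as-is; above it, Metropolis screening is applied to build the set to be trained.
    return kl_divergence(train_feats, test_feats) > threshold

rng = np.random.default_rng(0)
train = rng.normal(0.5, 1.2, size=2000)    # normalized features from the training set
test = rng.normal(0.0, 1.0, size=500)      # normalized features from the test set
print(kl_divergence(train, test), needs_resampling(train, test))
```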
Specifically, the samples in the training set are screened and sampled by the Metropolis algorithm.
In an embodiment, sampling the training set to obtain the set to be trained includes:
randomly sampling an image data feature x from the data distribution Q(x) of the training set;
sampling a candidate image data feature x* from the proposal distribution g(x*|x);
calculating a first intermediate value α, the Metropolis acceptance ratio, for example in its standard form α = min(1, (P(x*) · g(x|x*)) / (P(x) · g(x*|x)));
sampling a second intermediate value u from the uniform distribution U(0, 1);
judging whether the first intermediate value is greater than or equal to the second intermediate value;
if so, accepting the image data feature x* into the set to be trained, i.e. the chain moves to x*;
if not, accepting the image data feature x into the set to be trained, i.e. the chain stays at x;
repeating the above operations to screen all image data features x; after T rounds of iteration, a set of image data features obeying the test-set data distribution P(x) is obtained;
performing reverse iteration on the image data feature set to obtain the set to be trained;
where q(x*) denotes the data distribution of x* and q(x) denotes the data distribution of x.
In this embodiment, the samples obtained in this way are generated samples; a reverse iteration is therefore required to find, among the original training samples, the sample that best matches each generated sample. The best match is the original sample that minimizes the norm ‖x − x̃‖ of the difference between the original feature x and the generated feature x̃, where ‖·‖ denotes the norm.
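The sketch below illustrates this screening under simplifying assumptions: one-dimensional features, a symmetric Gaussian proposal (so the acceptance ratio reduces to P(x*)/P(x)), a kernel density estimate of the test-set density, and a nearest-neighbour "reverse iteration" based on the absolute difference. None of these concrete choices are fixed by the patent.

```python
# Sketch of Metropolis screening: draw features following the test-set distribution P(x),
# then map each accepted (generated) feature back to its nearest real training sample.
import numpy as np
from scipy.stats import gaussian_kde

def metropolis_screen(train_feats, test_feats, T=2000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    p = gaussian_kde(test_feats)                    # target density P(x) of the test set
    x = rng.choice(train_feats)                     # start from a random training feature
    accepted = []
    for _ in range(T):
        x_star = x + rng.normal(scale=step)         # proposal g(x*|x), symmetric Gaussian
        alpha = min(1.0, float(p(x_star) / p(x)))   # first intermediate value (acceptance ratio)
        u = rng.uniform(0.0, 1.0)                   # second intermediate value
        if alpha >= u:                              # accept: the chain moves to x*
            x = float(x_star)
        accepted.append(x)                          # otherwise the chain keeps x
    # "Reverse iteration": for each generated feature, pick the best-matching real sample,
    # here by minimizing the absolute difference (a norm) to the training features.
    idx = [int(np.argmin(np.abs(train_feats - a))) for a in accepted]
    return np.unique(idx)                           # indices forming the set to be trained

rng = np.random.default_rng(1)
train = rng.normal(0.5, 1.2, size=2000)
test = rng.normal(0.0, 1.0, size=500)
selected = metropolis_screen(train, test)
print(len(selected))
```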
In an embodiment, acquiring the global model from the cloud server and performing local training on the set to be trained to obtain a local model includes:
each end device using a Sparse R-CNN model to perform local training on the labeled urban prevention and control image data.
In the prior art a single-stage YOLO model is typically used; compared with the single-stage YOLO model, the Sparse R-CNN model achieves higher precision.
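As an illustration of the local training step, the sketch below runs a single optimization pass over one labeled sample. The patent specifies Sparse R-CNN; because that model is not bundled with torchvision, a torchvision Faster R-CNN detector is used here purely as a readily available stand-in, and the five scene classes, the optimizer settings and the dummy sample are assumptions.

```python
# Sketch of local detection training on one end device (stand-in detector, toy data).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_CLASSES = 5   # background + e.g. road damage, illegal parking, floating objects, garbage

def local_train(model, samples, epochs=1, lr=1e-4):
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for image, target in samples:              # target: {"boxes": [N, 4], "labels": [N]}
            losses = model([image], [target])      # torchvision detectors return a loss dict
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

model = fasterrcnn_resnet50_fpn(num_classes=NUM_CLASSES)
dummy = [(torch.rand(3, 256, 256),
          {"boxes": torch.tensor([[20.0, 30.0, 120.0, 140.0]]),
           "labels": torch.tensor([1])})]
local_train(model, dummy)                          # one toy step of "local training"
```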
In an embodiment, performing global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model includes:
the cloud server training on the basis of the local models using a Sparse R-CNN model to obtain an updated global model.
In the prior art a single-stage YOLO model is typically used; compared with the single-stage YOLO model, the Sparse R-CNN model achieves higher precision.
In an embodiment, performing global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model (step S105) includes:
S201, the cloud server counts the number of local models participating in the global model training and judges whether each local model has participated in past training within a specified time window;
S202, if not, the local model does not join the global model training;
S203, if so, the local model is assigned a corresponding weight and joins the global model training.
In this embodiment, each local model is checked to see whether it has participated in past training within the specified time window: within the window, the local models are more closely related, their correlation is higher, and the trained global model is more accurate. In other words, a local model trained within a limited number of global training rounds of the current round may still join the current round of global training.
If so, the local model is assigned a corresponding weight; the closer its training round is to the current global training round, the larger the weight.
The specified window may be expressed in rounds of global training.
In one embodiment, judging whether each local model has participated in past training within the specified time window includes:
judging, for each local model, whether the gap between the current global round s and the round s_k in which the local model was trained exceeds the time window b, i.e. whether s − s_k ≤ b holds,
where s is the round number of the current global model update, s_k is the round in which the local model participated in training, and b is the time window, a hyperparameter that sets how many versions of local models can be accommodated in the global model training.
In this embodiment, the round in which a local model participated in training is generally equal to the round number of the previous global model update.
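A minimal sketch of this check follows; the "≤" form of the comparison and the example values are assumptions, since the patent gives the condition only as a formula image.

```python
# Sketch of the staleness check S201-S202: a local model trained in round s_k joins the
# round-s global update only if it is recent enough relative to the time window b.
def participates(s: int, s_k: int, b: int) -> bool:
    """s: current global round; s_k: round the local model was trained in; b: time window."""
    return s - s_k <= b

candidates = [{"device": "A", "s_k": 9}, {"device": "B", "s_k": 4}, {"device": "C", "s_k": 10}]
s, b = 10, 3
selected = [c for c in candidates if participates(s, c["s_k"], b)]
print(selected)   # device B is too stale for window b=3 and is left out of this update
```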
In an embodiment, if so, assigning a corresponding weight to the local model and having it join the global model training includes:
S301, assigning the local model a corresponding weight β_k according to a formula in which the weight decays as the gap between the current global round s and the local model's training round s_k grows, at a rate controlled by the hyperparameter a;
S302, calculating the influence weight of each local model on the global model in round s according to a formula in which each local model k in the round-s participant set S_s contributes in proportion to its data share n_k / n and to its weight β_k, scaled by the learning rate η;
where n_k is the amount of data of a local model participating in the update in the same round, n is the total amount of data of all local models participating in the update in the same round, w_k is the weight value of the local model in round s, w_s is the weight value of the global model in round s, S_s is the set of local model codes in round s, η is the learning rate, and a is the hyperparameter that determines the decay rate of the weights.
In this embodiment, if the time-window condition s − s_k ≤ b holds, the local model may participate in the global model training, and the degree to which it participates is given by a suitable weight value computed with the above formulas, so that higher precision is achieved.
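The sketch below illustrates S301 and S302 under assumed functional forms: the staleness weight decays exponentially with s − s_k at a rate a, and each admitted local model pulls the global weights toward its own in proportion to its data share n_k/n, scaled by the learning rate. The patent's exact formulas are given only as images and are not reproduced here.

```python
# Sketch of staleness-weighted asynchronous aggregation with assumed functional forms.
import numpy as np

def aggregate(global_w, locals_, s, a=0.5, b=3, lr=1.0):
    """locals_: list of dicts with keys 'w' (np.ndarray), 's_k' (int), 'n_k' (int)."""
    admitted = [m for m in locals_ if s - m["s_k"] <= b]      # time-window check (S201-S203)
    if not admitted:
        return global_w
    n = sum(m["n_k"] for m in admitted)                       # total data of admitted models
    new_w = global_w.copy()
    for m in admitted:
        beta = np.exp(-a * (s - m["s_k"]))                    # assumed staleness decay (S301)
        new_w += lr * (m["n_k"] / n) * beta * (m["w"] - global_w)   # weighted influence (S302)
    return new_w

global_w = np.zeros(4)
locals_ = [{"w": np.ones(4), "s_k": 10, "n_k": 300},
           {"w": 2 * np.ones(4), "s_k": 8, "n_k": 100},
           {"w": 5 * np.ones(4), "s_k": 2, "n_k": 600}]       # too stale, excluded
print(aggregate(global_w, locals_, s=10))
```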
In an embodiment, after the local model has been assigned its corresponding weight and has joined the global model training, the method further includes:
S401, after the updated global model is obtained through training, the cloud server sends the global model to each end device;
S402, return to step S102 until the global model on the cloud server reaches the expected performance.
In this embodiment, the updated global model is sent back to each end device and the local model on the end device is updated accordingly, ensuring that the local model uploaded to the cloud server each time starts from the latest global model, which improves prediction accuracy.
Referring to fig. 2, an urban prevention and control image detection system based on asynchronous federated learning includes a cloud server and end devices.
The cloud server is used for initializing a global model, and for performing global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model.
Each end device is used for performing scene labeling on urban prevention and control image data, dividing the data into a training set and a test set, and screening out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained; for acquiring the global model from the cloud server and performing local training on the set to be trained to obtain a local model; and for uploading the local model to the cloud server after homomorphic encryption.
In an embodiment, the end device is further configured to locally train the labeled city prevention and control image data using a Sparse R-CNN model.
In an embodiment, the cloud server is configured to train the local model by using a Sparse R-CNN model to obtain an updated global model.
In an embodiment, the cloud server is configured to count the number of local models participating in the global model training and to judge whether each local model has participated in past training within a specified time window;
if not, the local model does not join the global model training;
if so, the local model is assigned a corresponding weight and joins the global model training.
In one embodiment, the cloud server is configured to judge whether each local model has participated in past training within the specified time window, i.e. whether the gap between the current global round s and the round s_k in which the local model was trained exceeds the time window b (s − s_k ≤ b),
where s is the round number of the current global model update, s_k is the round in which the local model participated in training, and b is the time window, a hyperparameter that sets how many versions of local models can be accommodated in the global model training.
In an embodiment, the cloud server is configured to assign the local model a corresponding weight β_k according to a formula in which the weight decays as the gap between the current global round s and the local model's training round s_k grows, at a rate controlled by the hyperparameter a, and to calculate the influence weight of each local model on the global model in round s according to a formula in which each local model k in the round-s participant set S_s contributes in proportion to its data share n_k / n and to its weight β_k, scaled by the learning rate η;
where n_k is the amount of data of a local model participating in the update in the same round, n is the total amount of data of all local models participating in the update in the same round, w_k is the weight value of the local model in round s, w_s is the weight value of the global model in round s, S_s is the set of local model codes in round s, η is the learning rate, and a is the hyperparameter that determines the decay rate of the weights.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An urban prevention and control image detection method based on asynchronous federated learning, characterized by comprising the following steps:
S101, a cloud server initializes a global model;
S102, each end device performs scene labeling on urban prevention and control image data, divides the data into a training set and a test set, and screens out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained;
S103, each end device acquires the global model from the cloud server and performs local training on the set to be trained to obtain a local model;
S104, each end device uploads the local model to the cloud server after homomorphic encryption;
S105, the cloud server performs global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model;
and S106, return to step S102 until the global model on the cloud server reaches the expected performance.
2. The urban prevention and control image detection method based on asynchronous federated learning according to claim 1, wherein acquiring the global model from the cloud server and performing local training on the set to be trained to obtain a local model comprises:
each end device using a Sparse R-CNN model to perform local training on the labeled set to be trained.
3. The urban prevention and control image detection method based on asynchronous federated learning according to claim 1, wherein screening out from the training set a data set whose distribution is close to that of the test set samples as the set to be trained comprises:
calculating the data distribution P(x) of the test set;
calculating the KL divergence between the feature distribution Q(x) of the training set and the feature distribution P(x) of the test set, in its standard form KL(Q ‖ P) = Σ_x Q(x) · log(Q(x) / P(x));
judging from the KL divergence whether the training set is close to the sample distribution of the test set;
if so, the training set does not need to be sampled;
if not, sampling the training set to obtain the set to be trained;
where x_k denotes the normalized data features corresponding to the partially sampled data in the k-th end device, Q(x) denotes the data distribution of the training set, and x denotes the normalized data features collected by an end device.
4. The urban prevention and control image detection method based on asynchronous federated learning according to claim 3, wherein sampling the training set to obtain the set to be trained comprises:
randomly sampling an image data feature x from the data distribution Q(x) of the training set;
sampling a candidate image data feature x* from the proposal distribution g(x*|x);
calculating a first intermediate value α, the Metropolis acceptance ratio, for example in its standard form α = min(1, (P(x*) · g(x|x*)) / (P(x) · g(x*|x)));
sampling a second intermediate value u from the uniform distribution U(0, 1);
judging whether the first intermediate value is greater than or equal to the second intermediate value;
if so, accepting the image data feature x* into the set to be trained, i.e. the chain moves to x*;
if not, accepting the image data feature x into the set to be trained, i.e. the chain stays at x;
repeating the above operations to screen all image data features x; after T rounds of iteration, a set of image data features obeying the test-set data distribution P(x) is obtained;
performing reverse iteration on the image data feature set to obtain the set to be trained;
where q(x*) denotes the data distribution of x* and q(x) denotes the data distribution of x.
5. The urban prevention and control image detection method based on asynchronous federated learning according to claim 1, wherein performing global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model comprises:
the cloud server counting the number of local models participating in the global model training, and judging whether each local model has participated in past training within a specified time window;
if not, the local model does not join the global model training;
if so, the local model is assigned a corresponding weight and joins the global model training.
6. The urban prevention and control image detection method based on asynchronous federated learning according to claim 5, wherein judging whether each local model has participated in past training within the specified time window comprises:
judging, for each local model, whether the gap between the current global round s and the round s_k in which the local model was trained exceeds the time window b, i.e. whether s − s_k ≤ b holds,
where s is the round number of the current global model update, s_k is the round in which the local model participated in training, and b is the time window, a hyperparameter that sets how many versions of local models can be accommodated in the global model training.
7. The urban prevention and control image detection method based on asynchronous federated learning according to claim 5, wherein, if so, assigning the local model a corresponding weight and having it join the global model training comprises:
assigning the local model a corresponding weight β_k according to a formula in which the weight decays as the gap between the current global round s and the local model's training round s_k grows, at a rate controlled by the hyperparameter a;
and calculating the influence weight of each local model on the global model in round s according to a formula in which each local model k in the round-s participant set S_s contributes in proportion to its data share n_k / n and to its weight β_k, scaled by the learning rate η;
where n_k is the amount of data of a local model participating in the update in the same round, n is the total amount of data of all local models participating in the update in the same round, w_k is the weight value of the local model in round s, w_s is the weight value of the global model in round s, S_s is the set of local model codes in round s, η is the learning rate, and a is the hyperparameter that determines the decay rate of the weights.
8. An urban prevention and control image detection system based on asynchronous federated learning, characterized by comprising a cloud server and end devices;
the cloud server being used for initializing a global model, and for performing global model training on the local models using an asynchronous federated computing strategy to obtain an updated global model;
each end device being used for performing scene labeling on urban prevention and control image data, dividing the data into a training set and a test set, and screening out from the training set a data set whose distribution is close to that of the test set samples to serve as the set to be trained; for acquiring the global model from the cloud server and performing local training on the set to be trained to obtain a local model; and for uploading the local model to the cloud server after homomorphic encryption.
9. The urban prevention and control image detection system based on asynchronous federated learning according to claim 8, characterized in that the end device is further used for locally training the labeled urban prevention and control image data using a Sparse R-CNN model.
10. The urban prevention and control image detection system based on asynchronous federated learning according to claim 8, characterized in that:
the cloud server is used for counting the number of local models participating in the global model training and for judging whether each local model has participated in past training within a specified time window;
if not, the local model does not join the global model training;
if so, the local model is assigned a corresponding weight and joins the global model training.
CN202111633287.8A 2021-12-29 2021-12-29 City prevention and control image detection method and system based on asynchronous federal learning Active CN113989627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111633287.8A CN113989627B (en) 2021-12-29 2021-12-29 City prevention and control image detection method and system based on asynchronous federal learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111633287.8A CN113989627B (en) 2021-12-29 2021-12-29 City prevention and control image detection method and system based on asynchronous federal learning

Publications (2)

Publication Number Publication Date
CN113989627A true CN113989627A (en) 2022-01-28
CN113989627B CN113989627B (en) 2022-05-27

Family

ID=79734845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111633287.8A Active CN113989627B (en) 2021-12-29 2021-12-29 City prevention and control image detection method and system based on asynchronous federal learning

Country Status (1)

Country Link
CN (1) CN113989627B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114694015A (en) * 2022-06-02 2022-07-01 深圳市万物云科技有限公司 General framework-based multi-task federal learning scene recognition method and related components
CN114726743A (en) * 2022-03-04 2022-07-08 重庆邮电大学 Service function chain deployment method based on federal reinforcement learning
CN115082903A (en) * 2022-08-24 2022-09-20 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium
EP4391472A1 (en) * 2022-12-22 2024-06-26 Ntt Docomo, Inc. Method for training a machine learning model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062215A (en) * 2019-12-10 2020-04-24 金蝶软件(中国)有限公司 Named entity recognition method and device based on semi-supervised learning training
CN111368886A (en) * 2020-02-25 2020-07-03 华南理工大学 Sample screening-based label-free vehicle picture classification method
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112232528A (en) * 2020-12-15 2021-01-15 之江实验室 Method and device for training federated learning model and federated learning system
CN112532451A (en) * 2020-11-30 2021-03-19 安徽工业大学 Layered federal learning method and device based on asynchronous communication, terminal equipment and storage medium
CN112686370A (en) * 2020-12-25 2021-04-20 深圳前海微众银行股份有限公司 Network structure search method, device, equipment, storage medium and program product
US20210158099A1 (en) * 2019-11-26 2021-05-27 International Business Machines Corporation Federated learning of clients

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210158099A1 (en) * 2019-11-26 2021-05-27 International Business Machines Corporation Federated learning of clients
CN111062215A (en) * 2019-12-10 2020-04-24 金蝶软件(中国)有限公司 Named entity recognition method and device based on semi-supervised learning training
CN111368886A (en) * 2020-02-25 2020-07-03 华南理工大学 Sample screening-based label-free vehicle picture classification method
CN111708640A (en) * 2020-06-23 2020-09-25 苏州联电能源发展有限公司 Edge calculation-oriented federal learning method and system
CN112532451A (en) * 2020-11-30 2021-03-19 安徽工业大学 Layered federal learning method and device based on asynchronous communication, terminal equipment and storage medium
CN112232528A (en) * 2020-12-15 2021-01-15 之江实验室 Method and device for training federated learning model and federated learning system
CN112686370A (en) * 2020-12-25 2021-04-20 深圳前海微众银行股份有限公司 Network structure search method, device, equipment, storage medium and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG CHEN ET AL.: "Communication-Efficient Federated Deep Learning With Layerwise Asynchronous Model Update and Temporally Weighted Aggregation", IEEE Transactions on Neural Networks and Learning Systems *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726743A (en) * 2022-03-04 2022-07-08 重庆邮电大学 Service function chain deployment method based on federal reinforcement learning
CN114726743B (en) * 2022-03-04 2024-07-23 北京北商西电科技有限公司 Service function chain deployment method based on federal reinforcement learning
CN114694015A (en) * 2022-06-02 2022-07-01 深圳市万物云科技有限公司 General framework-based multi-task federal learning scene recognition method and related components
CN115082903A (en) * 2022-08-24 2022-09-20 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium
CN115082903B (en) * 2022-08-24 2022-11-11 深圳市万物云科技有限公司 Non-motor vehicle illegal parking identification method and device, computer equipment and storage medium
EP4391472A1 (en) * 2022-12-22 2024-06-26 Ntt Docomo, Inc. Method for training a machine learning model
WO2024132259A1 (en) * 2022-12-22 2024-06-27 Ntt Docomo, Inc. Method for training a machine learning model

Also Published As

Publication number Publication date
CN113989627B (en) 2022-05-27

Similar Documents

Publication Publication Date Title
CN113989627B (en) City prevention and control image detection method and system based on asynchronous federal learning
CN110610197B (en) Method and device for mining difficult sample and training model and electronic equipment
CN109753928B (en) Method and device for identifying illegal buildings
CN113537172B (en) Crowd density determination method, device, equipment and storage medium
CN108388927A (en) Small sample polarization SAR terrain classification method based on the twin network of depth convolution
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN111210399B (en) Imaging quality evaluation method, device and equipment
CN110969215A (en) Clustering method and device, storage medium and electronic device
CN112770265B (en) Pedestrian identity information acquisition method, system, server and storage medium
CN109740479A (en) A kind of vehicle recognition methods, device, equipment and readable storage medium storing program for executing again
CN105208325A (en) Territorial resource monitoring and early warning method based on image fixed-point snapshot and comparative analysis
CN111723656B (en) Smog detection method and device based on YOLO v3 and self-optimization
CN113505643B (en) Method and related device for detecting violation target
CN110852164A (en) YOLOv 3-based method and system for automatically detecting illegal building
CN114360030A (en) Face recognition method based on convolutional neural network
CN113255590A (en) Defect detection model training method, defect detection method, device and system
CN111428653B (en) Pedestrian congestion state judging method, device, server and storage medium
Kong et al. Detecting type and size of road crack with the smartphone
CN116821777B (en) Novel basic mapping data integration method and system
Khosravi et al. Vehicle speed and dimensions estimation using on-road cameras by identifying popular vehicles
CN113094803B (en) Beacon equipment loss probability calculation method, device, equipment and storage medium
CN111680175B (en) Face database construction method, computer equipment and computer readable storage medium
CN111222370A (en) Case studying and judging method, system and device
Bhardwaj et al. Learning Pollution Maps from Mobile Phone Images.
Thakur et al. Evidence of long range dependence and self-similarity in urban traffic systems

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant