CN112085051B - Image classification method and system based on weighted voting and electronic equipment - Google Patents
- Publication number
- CN112085051B · CN202010724451.5A
- Authority
- CN
- China
- Prior art keywords
- client
- network model
- few
- classification
- sample network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/2415—Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
- G06N20/00—Machine learning
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
The invention discloses an image classification method and system based on weighted voting, and an electronic device. The method comprises the following steps: a server side acquires images to be classified and initiates a decision request to each client side; each client side checks its own state parameters according to the decision request and feeds a response signal back to the server side; the server side distributes the images to be classified to the target clients that can participate in the classification task according to the response signals; each target client inputs the images to be classified into its pre-trained few-sample network model for classification and obtains a first classification result; the server side scores the few-sample network model of each target client, performs a weighted voting calculation based on the scoring results and the first classification results, and outputs voting values; and the server side aggregates and sorts the voting values and outputs a second classification result. By scoring the clients' models and performing a weighted voting calculation on the first classification results output by the clients, the classification accuracy is further improved.
Description
Technical Field
The invention belongs to the technical field of machine learning, and in particular relates to an image classification method and system based on weighted voting, and an electronic device.
Background
Artificial intelligence has developed very rapidly in recent years, but the scarcity of labeled data and threats to data privacy remain two major challenges facing the field. On one hand, because of the value and sensitivity of data, the data held in most industries still exists in isolated silos: it is difficult to share, whether for reasons of corporate profit or out of concern for protecting user privacy. On the other hand, the labeled data required for machine learning is hard to obtain, and situations where labels are missing or scarce are common. Furthermore, an attacker may infer input data from some of the outputs of a given model, and may even recover the data set used for the original training, thereby stealing the data and compromising private information. A model framework that requires only a small amount of labeled data and can effectively protect private data is therefore urgently needed, so that it can be applied in artificial intelligence settings with little labeled data and high security requirements.
However, when too little labeled data is available, the classification accuracy of the model suffers.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention provides an image classification method and system based on weighted voting, and an electronic device. The technical problems to be solved by the invention are addressed by the following technical scheme:
In a first aspect, an embodiment of the present invention provides an image classification method based on weighted voting, including:
the server side acquires images to be classified and initiates, to each client side, a decision request asking whether that client can participate in the classification task;
each client side checks its own state parameters according to the decision request and then feeds back to the server side a response signal indicating whether it can participate in the classification task;
the server side distributes the images to be classified to the target clients that can participate in the classification task, according to the response signals fed back by the clients;
each target client inputs the images to be classified into its pre-trained few-sample network model for classification, obtains a first classification result, and uploads the first classification result to the server side; the few-sample network model comprises at least one of a small-sample (few-shot) network model and a semi-supervised network model;
the server side scores the few-sample network models of all the target clients and outputs a scoring result for each target client's few-sample network model; the server side performs a weighted voting calculation based on each target client's scoring result and first classification result, and outputs a voting value for each target client's few-sample network model;
and the server side aggregates and sorts the voting values of each target client's few-sample network model and outputs a second classification result.
Optionally, after the server side aggregates and sorts the voting values of each target client's few-sample network model and outputs the second classification result, the method further includes:
the server side updating the scoring result of each target client's few-sample network model according to the second classification result, for use in the weighted voting calculation of the next classification round.
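As a minimal sketch of the weighted voting calculation and score update described above (all function names, variable names, and the additive score-update rule are illustrative assumptions, not the patent's specific formulas):

```python
from collections import defaultdict

def weighted_vote(first_results, scores):
    """Aggregate per-client labels into one label by weighted voting.

    first_results: {client_id: predicted_label} -- the first classification results.
    scores: {client_id: weight} -- the server's score for each client's model.
    """
    votes = defaultdict(float)
    for client_id, label in first_results.items():
        votes[label] += scores.get(client_id, 1.0)  # unscored clients default to weight 1.0
    # The second classification result is the label with the highest total vote value.
    return max(votes, key=votes.get)

def update_scores(first_results, final_label, scores, step=0.1):
    """Reward clients that agreed with the final result; penalize the rest."""
    for client_id, label in first_results.items():
        if label == final_label:
            scores[client_id] = scores.get(client_id, 1.0) + step
        else:
            scores[client_id] = max(scores.get(client_id, 1.0) - step, 0.0)
    return scores
```

For instance, with results `{"c1": "cat", "c2": "dog", "c3": "cat"}` and scores `{"c1": 1.0, "c2": 2.5, "c3": 1.0}`, the single high-scoring model outvotes the two low-scoring ones, and the updated scores then feed the next round.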
Optionally, after the server side aggregates and sorts the voting values of each target client's few-sample network model and outputs the second classification result, the method further includes:
the server side applying differential privacy protection to the second classification result and outputting a third classification result.
Optionally, the training method of the pre-trained few-sample network model includes:
the client downloading public data and integrating it with its own private data to obtain model training parameters, wherein the public data is stored at the server side or on a public storage device independent of the server side;
the client inputting the model training parameters into its own few-sample network model to generate the pre-trained few-sample network model.
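The client-side integration of downloaded public data with private data might be sketched as follows (function names and the (features, label) data shape are illustrative assumptions, not from the patent):

```python
def build_training_set(public_data, private_data):
    """Merge downloaded public samples with the client's own private samples.

    Each element is a (feature_vector, label) pair; the merged list forms the
    model training parameters fed into the client's few-sample network model.
    """
    merged = list(public_data) + list(private_data)
    # Deduplicate by (features, label) so overlapping public samples
    # are not counted twice during training.
    seen, unique = set(), []
    for features, label in merged:
        key = (tuple(features), label)
        if key not in seen:
            seen.add(key)
            unique.append((features, label))
    return unique
```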
In a second aspect, an embodiment of the present invention further provides an image classification method, applied to a server, where the method includes:
acquiring images to be classified, and initiating to each client a decision request asking whether that client can participate in the classification task;
distributing the images to be classified to the target clients that can participate in the classification task, according to the response signals fed back by the clients, wherein each response signal is generated by a client according to the decision request;
receiving the first classification results; scoring the few-sample network model of each target client, and outputting a scoring result for each target client's few-sample network model;
performing a weighted voting calculation based on each target client's scoring result and first classification result, and outputting a voting value for each target client's few-sample network model;
aggregating and sorting the voting values of each target client's few-sample network model, and outputting a second classification result;
wherein the first classification result is obtained by each target client inputting the images to be classified into its own pre-trained few-sample network model; the few-sample network model includes at least one of a small-sample (few-shot) network model and a semi-supervised network model.
Optionally, after aggregating and sorting the voting values of each target client's few-sample network model and outputting the second classification result, the method further includes:
updating the scoring result of each target client's few-sample network model according to the second classification result, for use in the weighted voting calculation of the next classification round.
In a third aspect, an embodiment of the invention further provides an image classification method involving a server side and client sides; wherein,
the server side acquires images to be classified and initiates, to each client side, a decision request asking whether that client can participate in the classification task;
each client side checks its own state parameters according to the decision request and then feeds back to the server side a response signal indicating whether it can participate in the classification task;
the server side distributes the images to be classified to the target clients that can participate in the classification task, according to the response signals fed back by the clients;
each target client inputs the images to be classified into its pre-trained few-sample network model for classification, obtains a first classification result, and uploads the first classification result to the server side; the few-sample network model comprises at least one of a small-sample (few-shot) network model and a semi-supervised network model;
the server side scores the few-sample network models of all the target clients and outputs a scoring result for each target client's few-sample network model; the server side performs a weighted voting calculation based on each target client's scoring result and first classification result, and outputs a voting value for each target client's few-sample network model;
and the server side aggregates and sorts the voting values of each target client's few-sample network model and outputs a second classification result.
Optionally, after the server side aggregates and sorts the voting values of each target client's few-sample network model and outputs the second classification result, the method further includes:
the server side updating the scoring result of each target client's few-sample network model according to the second classification result, for use in the weighted voting calculation of the next classification round.
In a fourth aspect, an embodiment of the present invention further provides an image classification system, including: a server side and a client side, wherein,
the server side acquires images to be classified and initiates, to each client side, a decision request asking whether that client can participate in the classification task;
each client side checks its own state parameters according to the decision request and then feeds back to the server side a response signal indicating whether it can participate in the classification task;
the server side distributes the images to be classified to the target clients that can participate in the classification task, according to the response signals fed back by the clients;
each target client inputs the images to be classified into its pre-trained few-sample network model for classification, obtains a first classification result, and uploads the first classification result to the server side; the few-sample network model comprises at least one of a small-sample (few-shot) network model and a semi-supervised network model;
the server side scores the few-sample network models of all the target clients and outputs a scoring result for each target client's few-sample network model; the server side performs a weighted voting calculation based on each target client's scoring result and first classification result, and outputs a voting value for each target client's few-sample network model;
and the server side aggregates and sorts the voting values of each target client's few-sample network model and outputs a second classification result.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, including:
the judging module, which, after the server side acquires the images to be classified, initiates a decision request asking whether each client can participate in the classification task, and receives the response signals fed back by the clients;
the storage module, which acquires the images to be classified and distributes them to the target clients that can participate in the classification task according to the response signals;
the scoring module, which scores the few-sample network models of the target clients and outputs a scoring result for each target client's few-sample network model, and which performs a weighted voting calculation based on each target client's scoring result and first classification result and outputs a voting value for each target client's few-sample network model; the first classification result is obtained by each target client inputting the images to be classified into its own pre-trained few-sample network model; the few-sample network model comprises at least one of a small-sample (few-shot) network model and a semi-supervised network model; the pre-trained few-sample network model is generated by training after the client inputs public data and its own private data into its few-sample network model; the public data is stored at the server side or on a public storage device independent of the server side;
the summarizing module, which aggregates and sorts the voting values of each target client's few-sample network model and outputs a second classification result.
Optionally, the electronic device further includes:
the privacy module, which applies differential privacy protection to the second classification result output by the summarizing module and outputs a third classification result.
Compared with the prior art, the invention has the beneficial effects that:
the scheme provided by the embodiments of the invention uses multiple client models, each requiring only a small amount of labeled data, to address both the vulnerability of data privacy in existing machine learning to malicious attack and pollution and the need for large amounts of labeled data, while achieving good classification accuracy and classification confidence. In addition, scoring the clients' models and performing a weighted voting calculation on the first classification results output by the clients further improves the classification accuracy.
Drawings
Fig. 1 is a schematic flow chart of an image classification method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an image classification method according to another embodiment of the present invention;
FIG. 3 is a flow chart of an image classification method according to another embodiment of the present invention;
FIG. 4 is a flow chart of an image classification method according to a further embodiment of the invention;
FIG. 5 is a flow chart of an image classification method according to another embodiment of the invention;
Fig. 6 is a schematic structural diagram of an image classification system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a frame structure of an image classification system according to an embodiment of the present invention;
FIG. 8 is an electronic device for image classification provided by an embodiment of the invention;
FIG. 9 is another electronic device for image classification provided by an embodiment of the invention;
FIG. 10 is yet another electronic device for image classification provided by an embodiment of the invention;
fig. 11 is a diagram of experimental results of image classification accuracy provided by an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
In the following description, reference is made to "some embodiments" and "embodiments of the invention", which describe subsets of all possible embodiments. It is to be understood that "some embodiments" and "embodiments of the invention" may refer to the same subset or to different subsets of all possible embodiments, and may be combined with each other where no conflict arises. The terms "first", "second", "third" and the like are used merely to distinguish similar objects and do not imply a specific ordering; where permitted, "first", "second", and "third" may be interchanged so that the embodiments of the invention described herein can be practiced in an order other than that illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the embodiments of the invention is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before describing embodiments of the present invention in further detail, the terms involved in the embodiments are explained below; these terms are used throughout the following description.
1) Artificial intelligence (Artificial Intelligence, AI) is a body of theory, methods, techniques, and application systems that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results.
2) Machine learning (Machine Learning, ML) is one way of implementing artificial intelligence. At its most basic, machine learning uses algorithms to parse data, learn from it, and then make decisions and predictions about real-world events. Unlike conventional software programs hard-coded to solve a specific task, a machine learning system is "trained" on a large amount of data, using various algorithms to learn from that data how to accomplish the task. Machine learning grew directly out of the early artificial intelligence field; traditional algorithms include decision trees, clustering, Bayesian classification, support vector machines, EM, AdaBoost, and the like. By learning method, machine learning algorithms can be divided into supervised learning (e.g., classification problems), unsupervised learning (e.g., clustering problems), semi-supervised learning, ensemble learning, deep learning, and reinforcement learning.
3) Federated learning (Federated Learning, FL) is a machine learning framework that can effectively help multiple institutions use data and build machine learning models while satisfying user privacy protection, data security, and government regulations. For example, suppose there are two different enterprises A and B holding different data: enterprise A has user feature data, while enterprise B has product feature data and label data. Under data-protection rules such as the GDPR, the two enterprises cannot simply merge their data, because the original providers of the data, namely their respective users, may not have agreed to that. Suppose each party builds its own task model, where each task may be classification or prediction, and that user consent was obtained when the data was collected; the problem is then how to build a high-quality model at each of ends A and B. Because the data is incomplete (e.g., enterprise A lacks label data and enterprise B lacks user feature data) or insufficient (too little data to build a good model), the model at each end may be impossible to build or may perform poorly. Federated learning solves this problem: each enterprise's own data never leaves its local premises, and the federation builds a virtual common model by exchanging parameters under an encryption mechanism, i.e., without violating data privacy regulations. The virtual model behaves as if it were an optimal model built by pooling the data together; yet when it is built, the data itself does not move, so privacy is not leaked and data compliance is not affected. The models so built serve only the local objectives in their respective regions.
Under such a federated mechanism, all participants have the same identity and status, and the federated system helps everyone build a "common prosperity" strategy.
4) Small-sample learning (Few-shot Learning) is an application of meta-learning in the field of supervised learning. In the meta-training stage, the data set is decomposed into different meta-tasks to learn the model's ability to generalize under category changes; in the meta-testing stage, classification of new categories can be completed without changing the existing model. Small-sample learning addresses the problem of enabling a machine learning model, after learning a large amount of data for certain categories, to learn quickly from only a small number of samples of a new category.
5) Semi-supervised learning (Semi-Supervised Learning, SSL) is a key problem in pattern recognition and machine learning research, and is a learning method combining supervised and unsupervised learning. Semi-supervised learning performs pattern recognition using a large amount of unlabeled data together with a small amount of labeled data. Its basic idea is to build a learner on a model hypothesis about the data distribution and use it to label the unlabeled samples. Using semi-supervised learning requires less human labeling effort while delivering higher accuracy.
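One common semi-supervised scheme matching the idea above is pseudo-labeling: a model trained on the small labeled set assigns labels to unlabeled samples it is confident about. A minimal sketch (the names `model_predict`, `threshold`, and the confidence-based rule are illustrative assumptions, not the patent's specific method):

```python
def pseudo_label(model_predict, unlabeled, threshold=0.9):
    """Label unlabeled samples the model is confident about.

    model_predict(x) returns a (label, confidence) pair; samples below the
    confidence threshold stay unlabeled and can be revisited in a later round.
    """
    newly_labeled, still_unlabeled = [], []
    for x in unlabeled:
        label, confidence = model_predict(x)
        if confidence >= threshold:
            newly_labeled.append((x, label))
        else:
            still_unlabeled.append(x)
    return newly_labeled, still_unlabeled
```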
6) A prototype network (Prototypical Networks) can identify new classes never seen during training and requires only a small amount of sample data per class. The prototype network maps the sample data of each class into an embedding space and takes their mean to represent the prototype of that class. Using the Euclidean distance as the distance metric, training pushes each sample closest to its own class prototype and farther from the prototypes of the other classes. At test time, a softmax over the distances from the test sample to each class prototype determines its class label.
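The prototype computation and distance-softmax classification described above can be sketched as follows (a plain-Python illustration on raw feature vectors; a real prototype network would first embed the samples with a learned network):

```python
import math

def prototypes(support):
    """Mean embedding per class from a few labeled support samples.

    support: {class_label: [vector, ...]} -- a handful of samples per class.
    """
    protos = {}
    for label, vectors in support.items():
        dim = len(vectors[0])
        protos[label] = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    return protos

def classify(query, protos):
    """Softmax over negative Euclidean distances to each class prototype."""
    dists = {c: math.dist(query, p) for c, p in protos.items()}
    exps = {c: math.exp(-d) for c, d in dists.items()}
    total = sum(exps.values())
    probs = {c: e / total for c, e in exps.items()}
    return max(probs, key=probs.get), probs
```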
7) Few-sample network model: in the present invention, a few-sample network model refers to a network model that requires few labeled training samples, such as a small-sample (few-shot) network model or a semi-supervised network model. Small-sample learning models further include the prototype network (Prototypical Networks), the Siamese network (Siamese Network), the matching network (Match Network), and the like.
8) Differential privacy (Differential Privacy) is a technique from cryptography that aims to maximize the accuracy of queries against a statistical database while minimizing the chance of identifying the individual records behind them.
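A common concrete instance of differential privacy is the Laplace mechanism, which adds calibrated noise to a numeric query answer. The sketch below is a generic illustration (not the patent's specific protection step); the noise scale is sensitivity/epsilon, so a smaller epsilon gives stronger privacy but noisier answers:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity=1.0, epsilon=1.0):
    """Return true_value plus Laplace(0, sensitivity/epsilon) noise.

    The noise is drawn via inverse-transform sampling of the Laplace CDF.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5                      # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

With a very large epsilon the noise is negligible and the answer is essentially exact; with epsilon near zero the answer becomes almost useless but reveals almost nothing about any single record.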
Federated learning emerged in response to the scarce sample data and data privacy threats faced by the artificial intelligence field. Its design goal is to build machine learning models from data sets distributed across multiple devices, and to carry out efficient machine learning among multiple participants or computing nodes while guaranteeing information security during big-data exchange, protecting terminal data and personal privacy, and ensuring legal compliance. Although federated learning can, to some extent, train a model by combining the data of several institutions without any single institution holding a large amount of data, and can achieve data isolation to avoid data leakage, it still has drawbacks, such as: (1) the federated clients must use the same network model, which incurs high communication cost; (2) each participating institution still needs a large amount of labeled data to complete model training; (3) if a user mounts a malicious attack or pollutes the data, the federated model is easily affected; (4) the uploaded gradients can be inverted to recover the data, so the central server can steal users' data; and (5) federated learning can only utilize the data of a limited number of participants, on the order of one hundred, so the number of participants is constrained.
Based on the above, embodiments of the invention provide an image classification method, system, and electronic device that can classify the images to be classified on the basis of artificial intelligence and improve the classification accuracy. The scheme provided by the embodiments involves artificial-intelligence classification and decision techniques, such as training models for classification and classifying with the trained models; these are described in detail below.
Referring to fig. 1, fig. 1 is a flowchart of an image classification method according to an embodiment of the present invention, and the detailed description is made with reference to the steps shown in fig. 1.
S101, a server acquires images to be classified, and initiates a judging request asking whether each client can participate in the classification task.
It should be noted that the server side of the embodiment of the present invention may be a central server. It is mainly used for acquiring the image to be classified input by a user and transmitting the data, under certain conditions, to the target clients capable of participating in the classification task; it then summarizes and sorts the preliminary classification results of the clients and outputs the final classification result to the querying user. A client can be a terminal device of a participant, such as a mobile phone, computer or tablet. The system framework established by the invention requires a plurality of clients, and the clients of different participants may run the same machine learning model or different machine learning models. A client is mainly used for training its network model from the downloaded public data and its own private data and, after the model is trained, for preliminarily classifying the user's images to be classified distributed by the server with the trained network model. Furthermore, the image to be classified may be a picture to be classified, such as a photograph, an X-ray film, a CT film or an MR film; the images to be classified may be input one at a time or several at a time.
Specifically, each querier can input images to be classified on his or her own client, which is connected to and communicates with the server, so that the server can acquire and temporarily store the images to be classified. After the server acquires the images to be classified, it initiates a judging request asking whether the client of each participant can participate in the classification task. The judging request needs to be sent because various situations may prevent participation in the classification task, such as the client of an individual participant being offline, the device holder not agreeing to participate, or the model not yet being trained or updated.
S102, each client judges the state parameters of the client according to the judging request and feeds back response signals of whether the client can participate in the classification task to the server.
Specifically, after receiving the judging request sent by the server, the client of each participant checks its own state parameters; for example, if the client is online, the client holder agrees to participate in the classification, and the model has been trained and updated in advance, the client feeds back to the server a response signal indicating that it can participate in the classification task, i.e. that it can take part in the next classification task. Conversely, a client failing even one of the above conditions feeds back a response signal indicating that it cannot participate in the classification task. It should be noted that the judging request may simply test whether a client feeds back a response signal at all.
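As a minimal sketch of this check, assuming the client's state parameters are available as simple boolean flags (the flag names below are hypothetical, chosen only to mirror the conditions listed above):

```python
def can_participate(state):
    """Derive the response signal from the client's state parameters.
    A single failing condition yields a cannot-participate response."""
    required = ("online", "holder_agrees", "model_trained", "update_complete")
    return all(state.get(flag, False) for flag in required)
```

The client would feed the resulting `True` or `False` back to the server as its response signal.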
S103, the server distributes the images to be classified to target clients which can participate in the classification task according to response signals fed back by the clients.
After receiving the response signals fed back by each client indicating whether it can participate, the server distributes the images to be classified to the clients that can participate in the current classification task; for distinction, these clients are referred to as target clients.
S104, each target client inputs the image to be classified into its own pre-trained few-sample network model for classification to obtain a first classification result, and uploads the first classification result to the server; the few-sample network model includes at least one of a small-sample network model and a semi-supervised network model.
Each target client has its own network model. These are few-sample network models requiring only a few labelled training samples, for example a prototype network, twin network or matching network among the small-sample network models, or a semi-supervised network model; some clients may also use small-sample models while others use semi-supervised models. That is, in the image classification method of the embodiment of the present invention, different participants may use the same few-sample network model or different few-sample network models.
It should be noted that the embodiments of the present invention preferably employ different network models for the clients of different participants. In general federated learning methods, if users attack and pollute the training maliciously, the overall model is easily affected. Moreover, participants using mobile devices may have very different device performance, and allowing different models makes full use of each participant's data while taking device performance into account. The framework preferred by the embodiment of the present invention, in which the clients of different participants use different network models, is therefore less likely to be attacked and polluted, i.e. less exposed to privacy threats, and has higher security.
In addition, it should be noted that which few-sample network model a client adopts, and in particular whether a prototype network is used, must be agreed between the server and the clients in advance; for example, it may be agreed in advance that the network model of the client is a prototype network; or a small-sample network other than a prototype network; or small-sample networks other than a prototype network together with semi-supervised learning networks. Whether the network types of the clients include a prototype network must be agreed in advance because the information input at query time differs between the two cases.
Specifically, when the network types of the clients do not include a prototype network, the user only needs to input the image to be queried; in this case, the image to be classified is just the image to be queried. When the network types of the clients include a prototype network, the information input by the user at query time includes, besides the image to be queried, category images, i.e. images whose categories are definitely known; in this case, the image to be classified comprises the image to be queried and the category images. A prototype network also needs images of known categories at query time because what it learns during training is a projection: images are projected into another space so that the distance between images of the same class is minimized, and classification is carried out there. The category images provided by the querier are input at the same time and used to determine each prototype (class centre) after the prototype-network mapping, which makes classification more accurate and even allows classifying categories not seen during training. The known-category images only need to cover certain classes, which need not be numerous, and only a few images per known class are required. Which known-category images to input at query time is judged by the user according to experience and known information. For example, a doctor who knows from accumulated experience what a certain symptom of a certain disease looks like, and who now wants to judge whether a certain person has that disease, needs to input a symptom image of the disease, a normal image and the image of the person to be judged.
Based on the foregoing description, in this step each target client inputs the image to be classified into its own pre-trained few-sample network model for classification to obtain a first classification result; namely:
When the few-sample network model of a participant's client contains a prototype network, the image to be classified input by the user comprises the image to be queried and the category images; each such target client inputs the image to be queried and the category images into its pre-trained prototype network model for classification to obtain a first classification result.
When the few-sample network model of a participant's client does not contain a prototype network (for example, it contains a twin network and/or a matching network and/or a semi-supervised learning network), the image to be classified input by the user is just the image to be queried; each such target client inputs the image to be queried into its own pre-trained few-sample network model for classification to obtain a first classification result.
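To illustrate the prototype-network case, the following is a minimal sketch of nearest-prototype classification; the embedding function `embed` stands in for the client's trained network, and all names are illustrative rather than taken from the patent:

```python
import numpy as np

def classify_by_prototype(embed, query_image, category_images):
    """Nearest-prototype classification.
    embed: the client's trained embedding network (any image -> vector
           function works for this sketch).
    category_images: {class label: [a few labelled images]} supplied by
                     the querier to fix each prototype (class centre).
    """
    # Prototype of each class = mean embedding of its support images.
    prototypes = {
        label: np.mean([embed(img) for img in imgs], axis=0)
        for label, imgs in category_images.items()
    }
    q = embed(query_image)
    # Assign the query to the class whose prototype is nearest.
    return min(prototypes, key=lambda lbl: float(np.linalg.norm(q - prototypes[lbl])))
```

Because the prototypes are computed from the querier's category images at query time, classes never seen during training can still be distinguished, as the text above notes.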
In addition, when a client's few-sample network model is a prototype network, the input category images may have inconsistent formats and non-uniform storage layouts; for example, placing all images of different categories in a single folder ordered by name, versus placing each category's images in its own second-level folder, are two different storage modes. Therefore, the images to be classified are preprocessed before being input into the respective pre-trained few-sample network models for classification, so that a unified format and storage mode make the subsequent classification convenient.
Further, the pre-trained few-sample network model referred to in this step means that the network model of each target client has been trained in advance with training samples before the classification task is carried out; the training samples comprise public data and the client's own private data. Training is completed once, and every subsequent execution of the classification task directly takes the images to be classified as input. The training method comprises the following steps:
the client downloads the public data and integrates the public data and private data of the client to obtain model training parameters; the public data are stored on the server side or a public storage device which is independent of the server side;
the client inputs the model training parameters into the less-sample network model of the client to generate a pre-trained less-sample network model.
It should be noted that the participant uses two parts of data when training the model: one part is public data, the other part is the participant's own private data, and all of it carries known labels. Both public and private data are used for training because the public data set increases the amount of data available to an individual participant, which helps obtain a more accurate model.
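A sketch of this data-integration step, under the assumption that each data set is a simple list of (image, label) pairs:

```python
import random

def build_training_set(public_data, private_data, seed=0):
    """Merge the downloaded public labelled data with the client's own
    private labelled data into one training set; shuffling mixes the
    two sources before the few-sample model is trained on them."""
    combined = list(public_data) + list(private_data)
    random.Random(seed).shuffle(combined)
    return combined
```

The merged list would then be fed to the client's own few-sample network model as the model training parameters described above.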
S105, the server side scores the few-sample network models of all the target clients and outputs the scoring result of the few-sample network model of each target client; the server performs weighted voting calculation based on the scoring result and the first classification result of the few-sample network model of each target client, and outputs the voting value of the few-sample network model of each target client.
The server gives each target client an initial score of 1. On receiving the first classification results (voting results) uploaded by the target clients, the server starts a scoring function, scores the few-sample network model of each target client, and updates the score once each round of classification is completed. Scoring the few-sample network model of a target client means predicting the probability that it obtains a correct classification result in the current round of the classification task.
Specifically, the model used for scoring is a dynamic Bayesian network (Dynamic Bayesian Network, DBN), a statistical model capable of learning the probabilistic dependence between variables and the way that dependence evolves over time. The score that the dynamic Bayesian network model gives a target client's few-sample network (i.e. the predicted probability of it obtaining a correct classification result) is multiplied by that network's voting result (0 or 1) for a given class to form the final voting value; this process may also be referred to as a weighted voting technique.
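The score-times-vote step can be sketched as follows; the dictionary layout is an illustrative assumption, and the dynamic Bayesian network that produces the scores is not shown:

```python
def weighted_vote_values(scores, first_results):
    """scores: {client: DBN-predicted probability of a correct result,
    initially 1 for every client}.
    first_results: {client: {class label: 0-or-1 vote}}, the first
    classification results uploaded by the target clients.
    Returns each client's final voting value per class (score x vote)."""
    return {
        client: {label: scores[client] * vote for label, vote in votes.items()}
        for client, votes in first_results.items()
    }
```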
The server needs to score the few-sample network model of each target client because, owing to differences between the models, the probability of a correct classification result differs from one target client to another. By scoring each target client's few-sample network model and introducing the weighted voting technique, the scheme of the embodiment of the present invention makes the final classification result more accurate.
S106, the server gathers and sorts the voting values of the few-sample network models of each target client, and outputs a second classification result.
After the scoring and weighted voting calculation of the preceding steps, the obtained voting values are in effect the votes of each target client's network model on which category the image to be classified belongs to; the server calculates the number of votes for each class and sums them to obtain the total number of votes per class. The totals for each category can be transmitted by the server and displayed on the querier's client, and the querier takes the category with the highest total as the correct classification result.
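This tallying step can be sketched as follows, assuming the weighted voting values are held as {client: {class: value}} dictionaries:

```python
from collections import Counter

def total_votes_per_class(weighted_values):
    """Sum each class's weighted voting values over all target clients.
    The querier takes the class with the largest total as the result."""
    totals = Counter()
    for per_class in weighted_values.values():
        for label, value in per_class.items():
            totals[label] += value
    return dict(totals)
```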
The image classification method provided by the embodiment of the invention, by using the models of a plurality of clients that each need only a small amount of labelled data, solves both the problem that data privacy in existing machine learning is easily attacked and polluted maliciously and the problem that a large amount of labelled data is required, and achieves good classification accuracy and classification confidence; in addition, scoring the clients' models and performing the weighted voting calculation on the first classification results output by the clients further improves classification accuracy.
The scheme of the embodiment of the invention borrows the idea of federated learning: when massive sample data is lacking, a virtual overall network model framework is constructed from the network models of a plurality of clients to solve a practical classification problem. In clear contrast to federated learning, however, federated learning requires all participants to use the same model, and training cannot be completed if the participants' models differ; using the same model also means that a malicious user can continuously and directionally interfere with it, changing the shared virtual model and polluting the framework through attacks. Furthermore, owing to data-specificity issues such as data not being independently and identically distributed, federated learning is difficult to apply to a large number of participants, supporting at most about 100. The scheme of the embodiment of the invention works with each participant's data being scattered and small in quantity, places no limit on the number of participants, can effectively utilize more participant models, and allows more data owners to join, giving the method and system of the invention wider applicability. More importantly, the participants' models can differ, and these differing models better protect the constructed framework from being attacked and polluted by an attacker; they are also conducive to protecting the participants' privacy. Allowing participants in a single task to use different models is moreover more flexible across different schemes.
In addition, according to the scheme of the embodiment of the invention, the images to be classified are classified and voted through the few-sample network models of the plurality of target clients, and the network parameters are not uploaded, so that malicious models and data inference can be further prevented.
Referring to fig. 2, fig. 2 is a flowchart of an image classification method according to another embodiment of the present invention. On the basis of the above scheme, after step S106, in which the server gathers and sorts the voting values of the few-sample network model of each target client and outputs the second classification result, the method further includes:
and S107, the server updates the scoring result of the few-sample network model of each target client according to the second classification result so as to be used for weighted voting calculation in the next round of classification.
The second classification result obtained after scoring and weighted voting calculation is a more accurate classification result, and the server updates the score of the less-sample network model of each target client based on the more accurate classification result.
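The patent names a dynamic Bayesian network for the scoring but does not spell out its update rule; as an illustrative stand-in only, the score can be kept as a running rate of agreement with the final (second) classification result:

```python
def updated_score(old_score, agreed_with_final, rounds_so_far):
    """Running agreement-rate update after one round of classification.
    agreed_with_final: whether this client's vote matched the second
    classification result; rounds_so_far: rounds already scored."""
    hit = 1.0 if agreed_with_final else 0.0
    return (old_score * rounds_so_far + hit) / (rounds_so_far + 1)
```

Clients that repeatedly agree with the final result keep a score near 1 and thus a heavier vote in later rounds, matching the behaviour the embodiment describes.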
According to the scheme of the embodiment, on the basis of the embodiment, the score of the small sample network model of each target client is updated by adopting the more accurate classification result after the weighted voting calculation, so that the voting value obtained after the weighted voting calculation is more accurate when the next round of classification task is carried out, and the final classification result is more accurate. Thus, the classification result is more and more accurate after multiple classification tasks.
Referring to fig. 3, fig. 3 is a flowchart of an image classification method according to another embodiment of the present invention. On the basis of the above scheme, after the server in step S106 gathers and sorts the voting values of each target client and outputs the second classification result, the method preferably further includes the step:
s108, the server performs differential privacy protection on the second classification result and outputs a third classification result.
And the server outputs a third classification result after differential privacy protection to the inquirer.
It should be noted that steps S107 and S108 are both performed after the second classification result is output, but there is no fixed order between them: the scoring results of the few-sample models of the target clients may be updated according to the second classification result first, or the differential privacy protection may be applied to the second classification result first, or the score-updating process and the differential-privacy process may proceed in parallel.
The differential privacy protection mechanism is introduced because the framework constructed here summarizes the classification results of all participants. When the participants are numerous, the randomness of their online times already gives the framework strong privacy protection; but when there are few participants, an individual participant may receive a targeted attack, compromising that participant's privacy. The introduced differential privacy can hide whether a participant took part in a query.
Differential privacy is described below:
Let M be a random algorithm and let P_M denote the set of all possible outputs of M. For any two adjacent data sets D and D′ and any subset S_M of P_M, if the algorithm M satisfies the following condition:

P[M(D) ∈ S_M] ≤ e^ε × P[M(D′) ∈ S_M]
then the algorithm M provides ε-differential privacy protection. The smaller ε is, the higher the degree of privacy protection; the larger ε is, the higher the data availability (and the lower the confidentiality). Typically ε takes small values such as 0.001, 0.1, ln 2 or ln 3; that is, for two data sets differing in only one record, differential privacy protection is satisfied if the probabilities of any query result on them are very close. For example, suppose a hospital releases information stating that 10 of its patients have AIDS, and an attacker already knows the status of 9 of them; by comparing with the information released by the hospital, the attacker can learn whether the last person has AIDS, which is a differential privacy attack. If the result of querying 9 persons is consistent with the result of querying 10 persons, the attacker has no way to determine the 10th person's status, which is differential privacy protection.
The embodiment of the invention introduces an index differential privacy protection mechanism, and the index differential privacy protection mechanism is described below:
Let the output domain of the query function be Range, and regard each value r ∈ Range in the domain as an entity object. Under the exponential mechanism, the function q(N, r) → R is called the availability function of the output value r, and is used to evaluate how good an output value r is.
Let the random algorithm M take the data set N as input and output an entity object r ∈ Range, let q(N, r) be the availability function, and let Δq be the sensitivity of the function q(N, r). If the algorithm M selects and outputs r from Range with probability proportional to exp(ε·q(N, r) / (2Δq)), then the algorithm M provides ε-privacy protection.
Specifically, in this step, the server applies the above exponential differential privacy protection to the summarized and sorted classification result: the summarized voting result is the input data set N of the algorithm M, and the algorithm M returns the entity object r with probability proportional to exp(ε·q(N, r) / (2Δq)) over all possible output values. The server then sends the output value r obtained after exponential differential privacy protection to the querier.
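A sketch of the exponential mechanism described above, where `utility(r)` plays the role of q(N, r), for example the total weighted vote count of candidate class r; subtracting the maximum utility before exponentiating is a standard numerical safeguard and does not change the selection probabilities:

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity, rng=random):
    """Select one candidate r with probability proportional to
    exp(epsilon * utility(r) / (2 * sensitivity))."""
    best = max(utility(r) for r in candidates)
    weights = [math.exp(epsilon * (utility(r) - best) / (2.0 * sensitivity))
               for r in candidates]
    pick = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for r, w in zip(candidates, weights):
        acc += w
        if pick <= acc:
            return r
    return candidates[-1]  # guard against floating-point rounding
```

With a large ε the top-voted class is returned almost surely; with a small ε the other classes gain probability, which is what hides an individual participant's influence on the result.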
By introducing the exponential differential privacy protection mechanism, the output classification result can be effectively protected against reverse attacks in which an attacker restores the underlying data, so the information of specific users is protected; and user data cannot be divulged through the threat that user data might be stolen at the server.
The beneficial effects of the solution of the invention are further illustrated below by analysing the drawbacks of the prior art close to the solution of the invention.
The technical scheme of the document Differentially Private Federated Learning: A Client Level Perspective is as follows: under the federated learning framework, each round of iteration selects part of the participant devices, updates the model locally with their data, and after the model parameters are uploaded, the central server takes a weighted average of the parameters using a differential privacy algorithm. Its disadvantages: it does not consider that the server itself is a potential attacker, which can recover the training data of individual participants from the gradients; and it cannot solve the problem of device heterogeneity, so parameters uploaded by slow devices may go unused, or the training time may be greatly prolonged.
The technical scheme of the document Practical Secure Aggregation for Privacy-Preserving Machine Learning is as follows: the clients encrypt their own data by using a secure aggregation protocol, and the server can only decrypt the sum of the encrypted data and cannot decrypt a single encrypted data, thereby protecting the privacy of the user. But has the disadvantages that: if the client side deliberately sends a message with wrong format, the whole process is terminated; a malicious client may send any value of its own choice, affecting the final model.
In the image classification method of the embodiment of the invention, since the participants' models can differ, the differing models better protect the constructed framework from being attacked and polluted by an attacker, and the server cannot recover the training data of a single participant from gradients, which better protects the participants' privacy. In addition, the problem of device heterogeneity does not arise, and allowing participants in a single task to use different models is more flexible across different schemes. Even if a malicious client deliberately sends a message in a wrong format, the model system constructed as a whole is not affected.
The image classification method according to the embodiment of the present invention is described in detail above in connection with fig. 1 to 3. An image classification method according to still another embodiment of the present invention will be described in detail with reference to fig. 4.
It will be appreciated that the server-to-client interaction procedure described from the client side is the same as that described for both sides in the methods shown in fig. 1 or fig. 2, and the relevant description is omitted as appropriate to avoid repetition.
Referring to fig. 4, fig. 4 is a flowchart of an image classification method according to another embodiment of the present invention, where the method is applied to a client, and specifically includes:
S201, a client receives a judging request of whether the client can participate in the classification task, judges the state parameters of the client and then feeds back a response signal of whether the client can participate in the classification task to a server; a request for determining whether to participate in the classification task is initiated by the server.
S202, receiving an image to be classified, inputting the image to be classified into the pre-trained few-sample network model for classification, and outputting a first classification result; uploading the first classification result to the server. The few-sample network model comprises at least one of a small-sample network model and a semi-supervised network model; the images to be classified are distributed by the server according to the response signals.
The training method of the pre-trained few-sample network model comprises the following steps of:
the client downloads the public data, integrates the public data and private data of the client, and obtains model training parameters; the public data are stored on the server side or a public storage device which is independent of the server side;
the client inputs the model training parameters into the less-sample network model of the client to generate a pre-trained less-sample network model.
The specific training process of the model is the same as that of the above embodiment, and will not be described here again.
According to the image classification method applied to the client, the clients of a plurality of participants are required, and each client trains its model using private data together with external public data, so the trained models classify more accurately. The clients preferably use few-sample network models, and with more participants, the built framework is less prone to attack and pollution, i.e. less exposed to privacy threats, and has higher security.
Referring to fig. 5, fig. 5 is a flowchart illustrating an image classification method according to another embodiment of the invention. It will also be appreciated that the server-to-client interaction procedure described from the server side is the same as the two-sided description in the methods shown in fig. 1 to 3, and the relevant description is omitted as appropriate to avoid repetition.
The image classification method shown in fig. 5 is applied to a server, and specifically includes:
s301, acquiring an image to be classified, and initiating a judging request for whether the client can participate in the classification task.
S302, distributing the images to be classified to target clients which can participate in classification tasks according to response signals fed back by the clients; the response signal is generated by the client in response to the determination request.
S303, receiving the first classification results, scoring the few-sample network model of each target client, and outputting the scoring result of each target client's few-sample network model; the first classification result is obtained by each target client inputting the image to be classified into its pre-trained few-sample network model; the few-sample network model includes at least one of a small-sample network model and a semi-supervised network model.
S304, performing weighted voting calculation based on the scoring result and the first classification result of the few-sample network model of each target client, and outputting the voting value of each target client's few-sample network model.
S305, summarizing and sorting voting values of the few-sample network models of each target client, and outputting a second classification result. Further, after summarizing and sorting the voting values of the few-sample network model of each target client and outputting the second classification result, the method further comprises:
and S306, updating the grading result of the few-sample network model of each target client according to the second classification result, so as to be used for weighted voting calculation in the next round of classification.
Further, after summarizing and sorting the voting values of the few-sample network model of each target client and outputting the second classification result, the method further comprises:
s307, differential privacy protection is carried out on the second classification result, and a third classification result is output.
Similarly, steps S306 and S307 are both performed after the second classification result is output, but there is no fixed order between them: the scoring results of the few-sample models of the target clients may be updated according to the second classification result first, or the differential privacy protection may be applied to the second classification result first, or the two processes may proceed in parallel. According to the image classification method applied to the server, the server receives the classification results uploaded by the target clients, calculates the number of votes for each class, sums them to obtain the total number of votes per class, and feeds the totals back to the querier. The server scores the clients' models and performs the weighted voting calculation on the first classification results output by the clients, further improving classification accuracy. In addition, the exponential differential privacy protection mechanism, which outputs the voting result with the highest probability, effectively protects the output classification result against reverse attacks in which an attacker restores the data, thereby protecting the information of specific users; and user data cannot be divulged through the threat that user data might be stolen at the server.
On the basis of the foregoing embodiments, the present invention further provides an image classification method, which may be implemented by the foregoing system. It may be understood that the interaction procedure between the server and the client of the system is the same as that described for the two sides in the methods shown in fig. 1 to fig. 3, and the related description is omitted here to avoid repetition.
Referring to fig. 6 and fig. 7, fig. 6 is a schematic structural diagram of an image classification system according to an embodiment of the present invention; fig. 7 is a schematic frame structure of an image classification system according to an embodiment of the present invention.
The image classification system provided by the embodiment of the invention comprises: the system comprises a server and a client; wherein,
the server acquires images to be classified and initiates a judging request of whether each client can participate in a classification task; each client judges its own state parameters according to the judging request and then feeds back to the server a response signal indicating whether it can participate in the classification task; the server distributes the images to be classified, according to the response signals fed back by the clients, to the target clients that can participate in the classification task; each target client inputs the images to be classified into its pre-trained few-sample network model for classification to obtain a first classification result, and uploads the first classification result to the server; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the server scores the few-sample network models of the target clients and outputs the scoring result of the few-sample network model of each target client; the server performs weighted voting calculation based on the scoring result and the first classification result of the few-sample network model of each target client, and outputs the voting value of the few-sample network model of each target client; and the server summarizes and collates the voting values of the few-sample network models of the target clients and outputs a second classification result.
Likewise, in order to make the classification result more accurate, a weighted voting technique is introduced. Specifically, after the server summarizes and collates the voting values of the few-sample network models of the target clients and outputs the second classification result, the server updates the scoring result of the few-sample network model of each target client according to the second classification result, for use in the weighted voting calculation in the next round of classification.
And in order to prevent the data of the participants from being maliciously attacked, an exponential differential privacy mechanism is introduced: the server performs differential privacy protection on the second classification result and outputs a third classification result.
The image classification method provided by the embodiment of the invention uses the models of a plurality of clients, each of which needs only a small amount of labeled data, to solve the problems that data privacy in existing machine learning is vulnerable to malicious attack and pollution and that a large amount of labeled data is required; scoring the client models and performing weighted voting calculation on the first classification results output by the clients further improves classification accuracy. In addition, by introducing a differential privacy protection mechanism, the output classification result is effectively protected against an attacker restoring data through a reverse attack, and the information of specific users is protected; user data cannot be divulged even under the threat of the server stealing user data; and the method has good classification accuracy and classification confidence.
On the basis of the foregoing embodiments, an embodiment of the present invention further provides an electronic device corresponding to the client of the foregoing system, where the interaction process between the electronic device and the server is the same as the method of fig. 5; to avoid repetition, the related description is omitted appropriately.
Referring to fig. 7 and fig. 8 together, fig. 7 is a schematic diagram of the frame structure of an image classification system according to an embodiment of the present invention, and fig. 8 is a schematic diagram of an electronic device for image classification according to an embodiment of the present invention. The electronic device shown in fig. 8 includes:
the response module, which receives a judging request of whether the client can participate in the classification task, judges the state parameters of the client, and then feeds back to the server a response signal indicating whether the client can participate in the classification task; the judging request is initiated by the server.
The data processing module, which downloads public data and integrates the public data with private data of the client to obtain model training parameters; the public data are stored on the server or on a public storage device independent of the server. The data processing module also receives the images to be classified from the server; the images to be classified are distributed by the server according to the response signals fed back by the response module.
The computing module, which receives the images to be classified, inputs them into a pre-trained few-sample network model for classification, outputs a first classification result, and uploads the first classification result to the server; the pre-trained few-sample network model is obtained by the computing module training the few-sample network model of the client based on the model training parameters; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; and the images to be classified are distributed by the server according to the response signals.
On the basis of the foregoing embodiments, the embodiments of the present invention further provide an electronic device corresponding to the server of the foregoing system, where the interaction process between the electronic device and the client is the same as the method of fig. 3; to avoid repetition, the related description is omitted appropriately.
Referring to fig. 7 and fig. 9 together, fig. 9 is another electronic device for image classification according to an embodiment of the present invention, including:
and the judging module, which, after the server acquires the images to be classified, initiates a judging request of whether each client can participate in the classification task and receives the response signals fed back by the clients.
The storage module acquires the images to be classified and distributes the images to be classified to target clients which can participate in the classification task according to the response signals.
The scoring module, which scores the few-sample network models of the target clients and outputs the scoring result of the few-sample network model of each target client; based on the scoring result and the first classification result of the few-sample network model of each target client, weighted voting calculation is carried out, and the voting value of the few-sample network model of each target client is output; the first classification result is obtained by each target client inputting the images to be classified into its pre-trained few-sample network model; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the pre-trained few-sample network model is generated by training after the client inputs public data and private data of the client into the few-sample network model of the client; the public data are stored on the server or on a public storage device independent of the server;
and the summarizing module is used for summarizing and collating the voting values of the few-sample network model of each target client and outputting a second classification result.
Referring to fig. 10, fig. 10 is another electronic device for image classification according to an embodiment of the present invention, where the electronic device shown in fig. 10 corresponds to the electronic device of the service end of the foregoing system, and further includes: and the privacy module is used for carrying out differential privacy protection on the second classification result and outputting a third classification result.
Further, an updating module can be included to update the scoring result of the few-sample network model of each target client according to the second classification result, for use in the weighted voting calculation in the next round of classification.
The following describes in detail an image classification method, an image classification system and an algorithm framework used by electronic equipment according to the embodiments of the present invention:
Let P be the number of online participants; Mod_i is the model trained by the i-th participant, where i ∈ [0, P); C is the number of all possible categories of the classification task; c_j is the j-th class, j ∈ [0, C); p_i^k is the probability that the i-th participant obtains the correct result in the k-th vote, where x is the picture to be classified and the indicator of a correct vote takes the value 1; n_j is the total number of votes for class j.
(1) Each participant i trains its own model Mod_i using the public data set and its own private data;
(2) The inquirer sends an inquiry request to the central server to execute classification tasks;
(3) The central server determines participants capable of executing the tasks and transmits the classified tasks to the participants;
(4) Participant i uses its own model Mod_i to perform the classification and uploads its classification result c_j;
(5) The central server gives each participant an initial score value of 1 and scores participant i; using the scores and the classification results c_j, it calculates the total number of votes n_j for each class j, giving the list N = [n_0, n_1, ..., n_{C-1}]; and it updates the score of participant i;
(6) For the list N, the central server uses the exponential differential privacy mechanism M(N), which returns a result r with probability proportional to exp(ε·n_r/2) among all possible output values, to obtain the final voting result c;
(7) The central server returns the classification results to the inquirer.
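The score update in step (5) can be sketched as follows. The patent states that scores are updated from the final voting result but does not give the formula, so the additive reward/penalty rule, its constants, and all names here are illustrative assumptions.

```python
def update_scores(scores, votes, final_class, reward=0.1, penalty=0.1, floor=0.0):
    """Hypothetical update rule: raise the score of participants whose
    vote agreed with the final voting result and lower the score of
    those whose vote disagreed, for use as weights in the next round.

    scores:      dict mapping participant id -> current score (initially 1)
    votes:       dict mapping participant id -> class index it voted for
    final_class: the final voting result for this round
    """
    for participant, cls in votes.items():
        if cls == final_class:
            scores[participant] = scores.get(participant, 1.0) + reward
        else:
            # Scores are clipped at a floor so a weight never goes negative.
            scores[participant] = max(floor, scores.get(participant, 1.0) - penalty)
    return scores
```

Over many rounds this gives consistently correct participants more influence in the weighted voting calculation, which is the stated purpose of the scoring mechanism.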
The invention will be described in connection with a practical scenario.
Assume that the models used by the participants' clients are prototype network models, and that the prototype network models of the plurality of participants perform the classification calculation for a query initiated by the inquirer. The known category image set of the inquirer is S = {S_1, ..., S_K}, where x_i^k denotes the feature vector of the i-th sample in the k-th class image set S_k, and |S_k| denotes the number of samples in class k; the image to be queried is x_q. After receiving the data distributed by the server, participant a first uses the projection function f_a(x) learned by its model to project all images into its projection space, then calculates the prototype c_k = (1/|S_k|) Σ_{x_i ∈ S_k} f_a(x_i) for each class in that space, and then calculates the distance d(f_a(x_q), c_k) from x_q to each prototype in this projection space; the class of the prototype with the shortest distance is the predicted class, which is uploaded to the server. The other participants do the same. After the server receives the votes of all the participants, the scoring mechanism is started to score the prototype network model of each participant, and weighted voting calculation is carried out based on the scoring results and the voting results to obtain the total number of votes n_j for each class j, giving the list N = [n_0, n_1, ..., n_{C-1}]; finally, the final voting result is obtained using the exponential differential privacy mechanism M(N) and returned to the inquirer.
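The per-participant prototype-network classification step above can be sketched as follows; the function names and the use of the Euclidean distance as d(·,·) are assumptions for illustration.

```python
import numpy as np


def classify_with_prototypes(f, support_sets, x_q):
    """Prototype-network classification: project the support images and
    the query with the participant's learned projection f, average each
    class's projections to get its prototype c_k, and return the class
    of the prototype nearest to the projected query.

    f:            projection function mapping an image array to a vector
    support_sets: list where support_sets[k] is an array of class-k samples
    x_q:          the image to be queried
    """
    q = f(x_q)
    # c_k = (1/|S_k|) * sum of f(x_i) over x_i in S_k
    prototypes = [np.mean([f(x) for x in S_k], axis=0) for S_k in support_sets]
    dists = [np.linalg.norm(q - c_k) for c_k in prototypes]
    return int(np.argmin(dists))
```

Each participant would run this with its own learned f and upload the returned class index as its vote.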
The present invention is experimentally verified in terms of classification accuracy and classification confidence.
First, a prototype network is trained for each participant; the accuracy and privacy of the final classification depend largely on the number of participants.
The experiments used the MNIST and Omniglot datasets. For both datasets, the prototype network stacks four convolutional layers with max pooling and activation functions.
Classification accuracy: all other things being equal, the accuracy of classification is limited by the number of training classes and the number of training samples used to train the models; clearly, the more training samples, the higher the accuracy. An experiment was performed on the Omniglot dataset, which has 20 samples per character; the number of categories of public data was set to four times the number of categories of private data, and the number of characters used to train each model was varied to observe the change in accuracy. The experiments show that the more categories of training samples are used, the higher the classification accuracy; as shown in fig. 11, when the number of training sample categories approaches that of centralized learning, the classification accuracy is almost the same. It should be noted that fig. 11 shows experimental classification results without the scoring mechanism; after the scoring mechanism is introduced, the classification accuracy is higher than the results shown in fig. 11.
Classification confidence: in order to protect the privacy of the classification results obtained by a group of participants, a certain number of participants are required to vote for the same label. Our privacy analysis reflects this observation: it provides a tighter privacy bound when the number of participants is high enough. We count the number of votes for each possible label and measure the difference in vote counts between the most popular label and the second most popular label. If this difference is large, the probability of obtaining the label with the largest number of votes remains high even when noise is introduced into the aggregation result. By measuring this difference normalized by the total number of participants P, the experiments show that as P increases, the gap between the vote counts for the same picture remains larger than 60% of the participants, which demonstrates that the classification method and system of the invention are very likely to output the correct label even in the presence of noise.
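The vote-gap measurement used in this confidence experiment can be sketched as follows; the function name and the handling of the single-label edge case are illustrative assumptions.

```python
def normalized_vote_gap(vote_totals, num_participants):
    """Difference between the vote counts of the most popular and the
    second most popular labels, normalized by the total number of
    participants P. A large gap means the noisy aggregation is still
    very likely to output the top label."""
    top_two = sorted(vote_totals, reverse=True)[:2]
    if len(top_two) < 2:
        # Only one possible label: the gap is the whole vote share.
        return top_two[0] / num_participants
    return (top_two[0] - top_two[1]) / num_participants
```

For example, with votes [80, 10, 10] among P = 100 participants the normalized gap is 0.7, comfortably above the 60% threshold reported in the experiment.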
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.
Claims (7)
1. An image classification method based on weighted voting, comprising:
the method comprises the steps that a server side obtains images to be classified, and a judging request for whether each client side can participate in a classification task or not is initiated;
each client judges the state parameters of the client according to the judging request and then feeds back response signals whether the client can participate in the classification task to the server;
the server distributes the images to be classified to target clients which can participate in classification tasks according to response signals fed back by the clients;
inputting the images to be classified into a few-sample network model trained in advance by each target client to classify, and obtaining a first classification result; uploading the first classification result to the server; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the training method of the pre-trained few-sample network model comprises the following steps: the client downloads public data and integrates the public data with private data of the client to obtain model training parameters; the public data are stored on the server or on a public storage device independent of the server; the client inputs the model training parameters into the few-sample network model of the client to generate the pre-trained few-sample network model;
The server side scores the few-sample network models of all the target clients and outputs the scoring result of the few-sample network model of each target client; the server performs weighted voting calculation based on the scoring result of the few-sample network model of each target client and the first classification result, and outputs the voting value of the few-sample network model of each target client;
and the server side gathers and sorts the voting values of the few-sample network model of each target client side and outputs a second classification result.
2. The image classification method according to claim 1, wherein after the server performs a summary arrangement on the voting values of the few-sample network model of each target client, the method further comprises:
and the server updates the scoring result of the few-sample network model of each target client according to the second classification result so as to be used for weighted voting calculation in the next round of classification.
3. The image classification method according to claim 1, wherein after the server performs a summary arrangement on the voting values of the few-sample network model of each target client, the method further comprises:
And the server performs differential privacy protection on the second classification result and outputs a third classification result.
4. An image classification method, which is characterized by being applied to a server, the method comprising:
acquiring an image to be classified, and initiating a judging request for whether the client can participate in the classification task;
distributing the images to be classified to target clients which can participate in classification tasks according to response signals fed back by the clients; the response signal is generated by the client according to the judging request;
receiving a first classification result; scoring the few-sample network model of each target client, and outputting the scoring result of the few-sample network model of each target client;
based on the scoring result of the few-sample network model of each target client and the first classification result, carrying out weighted voting calculation, and outputting the voting value of the few-sample network model of each target client;
summarizing and sorting voting values of the few-sample network model of each target client, and outputting a second classification result;
updating the scoring result of the few-sample network model of each target client according to the second classification result so as to be used for weighted voting calculation in the next round of classification;
The first classification result is obtained by each target client inputting the images to be classified into its pre-trained few-sample network model; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the pre-trained few-sample network model is generated by training after the client inputs public data and private data of the client into the few-sample network model of the client.
5. An image classification method is characterized by comprising a server and a client; the method comprises the steps that a server side obtains images to be classified, and a judging request for whether each client side can participate in a classification task is initiated;
each client judges the state parameters of the client according to the judging request and then feeds back response signals whether the client can participate in the classification task to the server;
the server distributes the images to be classified to target clients which can participate in classification tasks according to response signals fed back by the clients;
inputting the images to be classified into a few-sample network model trained in advance by each target client to classify, and obtaining a first classification result; uploading the first classification result to the server; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the pre-trained few-sample network model is generated by training after the client inputs public data and private data of the client into the few-sample network model of the client;
The server side scores the few-sample network models of all the target clients and outputs the scoring result of the few-sample network model of each target client; the server performs weighted voting calculation based on the scoring result of the few-sample network model of each target client and the first classification result, and outputs the voting value of the few-sample network model of each target client;
the server side gathers and sorts voting values of the few-sample network model of each target client side and outputs a second classification result;
and the server updates the scoring result of the few-sample network model of each target client according to the second classification result so as to be used for weighted voting calculation in the next round of classification.
6. An image classification system, comprising: the system comprises a server side and client sides, wherein the server side acquires images to be classified, and initiates a judging request for whether each client side can participate in a classification task;
each client judges the state parameters of the client according to the judging request and then feeds back response signals whether the client can participate in the classification task to the server;
the server distributes the images to be classified to target clients which can participate in classification tasks according to response signals fed back by the clients;
Inputting the images to be classified into a few-sample network model trained in advance by each target client to classify, and obtaining a first classification result; uploading the first classification result to the server; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the pre-trained few-sample network model is generated by training after the client inputs public data and private data of the client into the few-sample network model of the client;
the server side scores the few-sample network models of all the target clients and outputs the scoring result of the few-sample network model of each target client; the server performs weighted voting calculation based on the scoring result of the few-sample network model of each target client and the first classification result, and outputs the voting value of the few-sample network model of each target client;
the server side gathers and sorts voting values of the few-sample network model of each target client side and outputs a second classification result;
and the privacy module is used for carrying out differential privacy protection on the second classification result output by the summarizing module and outputting a third classification result.
7. An electronic device, comprising:
the judging module initiates a judging request for judging whether the client can participate in the classification task or not after the server acquires the image to be classified, and receives a response signal fed back by the client;
the storage module is used for acquiring the images to be classified and distributing the images to be classified to target clients which can participate in classification tasks according to response signals;
the scoring module, which scores the few-sample network models of the target clients and outputs the scoring result of the few-sample network model of each target client; based on the scoring result and the first classification result of the few-sample network model of each target client, weighted voting calculation is carried out, and the voting value of the few-sample network model of each target client is output; the first classification result is obtained by each target client inputting the images to be classified into its pre-trained few-sample network model; the few-sample network model comprises at least one of a few-shot network model and a semi-supervised network model; the pre-trained few-sample network model is generated by training after the client inputs public data and private data of the client into the few-sample network model of the client; the public data are stored on the server or on a public storage device independent of the server;
And the summarizing module is used for summarizing and collating the voting values of the few-sample network model of each target client and outputting a second classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010724451.5A CN112085051B (en) | 2020-07-24 | 2020-07-24 | Image classification method and system based on weighted voting and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112085051A CN112085051A (en) | 2020-12-15 |
CN112085051B true CN112085051B (en) | 2024-02-09 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108038448A (en) * | 2017-12-13 | 2018-05-15 | 河南理工大学 | Semi-supervised random forest Hyperspectral Remote Sensing Imagery Classification method based on weighted entropy |
WO2019050247A2 (en) * | 2017-09-08 | 2019-03-14 | 삼성전자 주식회사 | Neural network learning method and device for recognizing class |
CN109657697A (en) * | 2018-11-16 | 2019-04-19 | 中山大学 | Classified optimization method based on semi-supervised learning and fine granularity feature learning |
CN110533106A (en) * | 2019-08-30 | 2019-12-03 | 腾讯科技(深圳)有限公司 | Image classification processing method, device and storage medium |
CN111310938A (en) * | 2020-02-10 | 2020-06-19 | 深圳前海微众银行股份有限公司 | Semi-supervision-based horizontal federal learning optimization method, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
Liu Zhenyu; Li Qinfu; Yang Shuo; Deng Yingqiang; Liu Fen; Lai Xinming; Bai Xueke. A sentiment analysis model based on active learning and multiple kinds of supervised learning. Journal of China Academy of Electronics and Information Technology. 2020, (Issue 02), full text. *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||