CN114398635A - Layered secure federated learning method and apparatus, electronic device, and storage medium

Layered secure federated learning method and apparatus, electronic device, and storage medium

Info

Publication number
CN114398635A
Authority
CN
China
Prior art keywords
user
model
local
user behavior
score
Legal status
Pending
Application number
CN202111444193.6A
Other languages
Chinese (zh)
Inventor
杨树杰
许长桥
王明泽
周赞
马腾超
丁中医
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Application filed by Beijing University of Posts and Telecommunications
Priority to CN202111444193.6A
Publication of CN114398635A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50: Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/55: Detecting local intrusion or implementing counter-measures
    • G06F 21/56: Computer malware detection or handling, e.g. anti-virus arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Virology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Storage Device Security (AREA)

Abstract

The invention provides a layered secure federated learning method and apparatus, an electronic device, and a storage medium. The method comprises: issuing a global model to each local user, and instructing each local user to send the local model generated by training the global model to an intermediate layer; obtaining the user behavior identifier and user behavior score generated after the intermediate layer anonymizes each local model, where the anonymization comprises the intermediate layer shuffling the local models and adding perturbations to generate anonymized models and then performing anomaly detection on the anonymized models to obtain the user behavior identifier and user behavior score; and determining malicious users from the user behavior identifiers and user behavior scores and banning them. By having the intermediate layer anonymize and anomaly-check the private information carried by each local model, the method improves the security and reliability of privacy protection.

Description

Layered secure federated learning method and apparatus, electronic device, and storage medium
Technical Field
The present invention relates to the technical field of federated machine learning, and in particular to a layered secure federated learning method and apparatus, an electronic device, and a storage medium.
Background
With breakthroughs in technologies such as the Internet of Things, mobile communications, and portable devices, the volume of data in networks has grown sharply. Much of the information collected by mobile phones and portable devices concerns personal privacy, and with privacy-leakage incidents occurring frequently, public awareness of privacy protection keeps strengthening; federated learning has emerged in response and is being widely adopted.
In the related art, federated learning hides individual private data by training the global model locally to produce a local model and then uploading that local model, which carries the hidden private data. This protects privacy to some extent, but privacy can still leak during the local upload step, so the efficiency of privacy protection is low and the ability to withstand attacks is weak.
Disclosure of Invention
The invention provides a layered secure federated learning method and apparatus, an electronic device, and a storage medium to remedy the defect of the prior art that, even though federated learning hides personal private data while training the global model locally, privacy still leaks in the local upload step, and to markedly improve the efficiency of privacy protection by introducing an intermediate layer that comprehensively protects the private data.
The invention provides a layered secure federated learning method, comprising:
issuing a global model to each local user, and instructing each local user to send the local model generated by training the global model to an intermediate layer;
obtaining the user behavior identifier and user behavior score generated after the intermediate layer anonymizes each local model, where the anonymization comprises the intermediate layer shuffling the local models and adding perturbations to generate anonymized models and then performing anomaly detection on the anonymized models to obtain the user behavior identifier and user behavior score; and
determining malicious users from the user behavior identifiers and user behavior scores and banning them.
According to the layered secure federated learning method provided by the invention, when the intermediate layer comprises at least two sub-intermediate layers, issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer comprises:
grouping all local users to obtain at least two user groups, each containing at least two local users;
establishing a correspondence between the user groups and the sub-intermediate layers; and
when issuing the global model to each local user, carrying the identifier of the sub-intermediate layer that is to receive the local model generated after the local user trains the global model, so that each local user sends its local model to the corresponding sub-intermediate layer.
According to the layered secure federated learning method provided by the invention, obtaining the user behavior identifier and user behavior score generated after the intermediate layer anonymizes each local model comprises:
obtaining the user behavior identifier and user behavior score generated after the intermediate layer performs anomaly detection on each anonymized model, where the anonymized models are generated by shuffling the identifiers of the local models, shuffling their contents, and adding perturbations.
According to the layered secure federated learning method provided by the invention, determining malicious users from the user behavior identifiers and user behavior scores and banning them comprises:
correspondingly updating the at least two user scores of the local users in the user groups according to the user behavior identifiers and user behavior scores to obtain at least two new user scores; and
when a target user score exceeding a preset user-score threshold exists among the at least two new user scores, determining the local user corresponding to the target user score to be a malicious user and banning that malicious user.
According to the layered secure federated learning method provided by the invention, after correspondingly updating the at least two user scores of the local users in the user groups according to the user behavior identifiers and user behavior scores to obtain at least two new user scores, the method further comprises:
determining the weight of each anonymized model in the corresponding user group based on the user behavior score;
performing a model aggregation operation according to the weights of the anonymized models to obtain a new global model, then returning to the step of issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer, so as to execute the next round of training; and
obtaining a global model that meets the preset accuracy requirement and satisfies the convergence condition.
According to the layered secure federated learning method provided by the invention, before obtaining the global model that meets the preset accuracy requirement and satisfies the convergence condition, the method further comprises:
when the number of training rounds reaches M, updating the corresponding user scores according to the new user behavior identifiers and new user behavior scores generated by the M-th round of training, obtaining at least two new user scores for the M-th round, where M is a positive integer;
judging whether a target user score exceeding a preset user-score threshold exists among the at least two new user scores; and
when such a target user score exists among the at least two new user scores, determining the local user corresponding to it to be a malicious user and banning that malicious user.
According to the layered secure federated learning method provided by the invention, when the number of training rounds reaches N, an accuracy test and a convergence test are performed on the new global model generated by the N-th round of training, where N is a positive integer;
if the new global model generated by the N-th round of training meets the preset accuracy requirement and satisfies the preset convergence condition, it is determined to be the converged global model that meets the preset accuracy requirement; and
if the new global model generated by the N-th round of training does not meet the preset accuracy requirement and/or does not satisfy the preset convergence condition, the method returns to the step of issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer, so as to continue training.
The invention also provides a layered secure federated learning system, comprising a central server, a number of local users, and an intermediate layer, wherein:
the central server is configured to issue a global model to each local user and instruct each local user to send the local model generated by training the global model to the intermediate layer;
each local user is configured to train the received global model to generate a local model and send the local model to the intermediate layer; and
the intermediate layer is configured to generate the user behavior identifiers and user behavior scores by anonymizing each local model, where the anonymization comprises the intermediate layer shuffling the local models and adding perturbations to generate anonymized models, performing anomaly detection on the anonymized models to obtain the user behavior identifiers and user behavior scores, and synchronizing them to the central server; the central server is further configured to determine malicious users from the user behavior identifiers and user behavior scores and ban them.
The invention also provides a layered secure federated learning apparatus, comprising:
a sending module, configured to issue the global model to each local user and instruct each local user to send the local model generated by training the global model to the intermediate layer;
an obtaining module, configured to obtain the user behavior identifier and user behavior score generated after the intermediate layer anonymizes each local model, where the anonymization comprises the intermediate layer shuffling the local models and adding perturbations to generate anonymized models and then performing anomaly detection on the anonymized models to obtain the user behavior identifier and user behavior score; and
a processing module, configured to determine malicious users from the user behavior identifiers and user behavior scores and ban them.
The invention also provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the layered secure federated learning methods described above.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the layered secure federated learning method described above.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the layered secure federated learning methods described above.
With the layered secure federated learning method and apparatus, electronic device, and storage medium provided by the invention, the privacy leakage of the model-upload step in existing federated learning is remedied by issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to an intermediate layer instead of uploading it directly to the central server. Further, after the central server obtains the user behavior identifiers and user behavior scores generated by the intermediate layer's anonymization of the local models, it determines malicious users based on them and bans those users. The anonymization performed by the intermediate layer comprises shuffling the local models and adding perturbations to generate anonymized models, then performing anomaly detection on each anonymized model to obtain its user behavior identifier and user behavior score. Anonymizing and anomaly-checking the private information carried by each local model through the intermediate layer improves the security and reliability of privacy protection, while having the central server determine malicious users from the user behavior identifiers and scores and ban them allows different malicious users to be located and banned quickly, achieving an efficient defense.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow diagram of the layered secure federated learning method provided by the present invention;
FIG. 2 is an interaction diagram of the layered secure federated learning system provided by the present invention;
FIG. 3 is a schematic structural diagram of the layered secure federated learning apparatus provided by the present invention;
FIG. 4 is a schematic structural diagram of the electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The invention provides a layered secure federated learning method whose execution subject is the central server of a layered secure federated learning system. The system comprises the central server, a number of local users, and the intermediate layer; the central server may be a server with user-grouping, user-numbering, global-model-aggregation, and user-score-updating functions. The present invention does not limit the specific form of the terminal device.
FIG. 1 is a schematic flow diagram of the layered secure federated learning method provided by the present invention; as shown in FIG. 1, the method includes:
Step 110: issue the global model to each local user, and instruct each local user to send the local model generated by training the global model to the intermediate layer.
Specifically, the central server stores a table of correspondences between the local users and their numbers. When issuing the global model to each local user, the central server can attach the identifier of the intermediate layer that is to receive the local models, so that each local user, after training the received global model and generating a local model, sends that local model on to the intermediate layer.
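The patent does not specify a message format; purely as an illustration, the dispatch of step 110 can be pictured as bundling the serialized global model with the identifier of the intermediate layer that should receive the upload. All names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GlobalModelDispatch:
    """Illustrative payload for step 110; field names are assumptions."""
    round_id: int           # current training round
    global_weights: bytes   # serialized global model parameters
    middle_layer_id: str    # identifier of the (sub-)intermediate layer that
                            # should receive the user's trained local model
```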
Step 120: obtain the user behavior identifier and user behavior score generated after the intermediate layer anonymizes each local model.
The anonymization comprises the intermediate layer shuffling the local models and adding perturbations to generate anonymized models, then performing anomaly detection on the anonymized models to obtain the user behavior identifier and user behavior score. Adding a perturbation may include adding differential-privacy noise.
Specifically, let the number of local users be P, where P is an integer greater than or equal to 2. When the intermediate layer receives the local models sent by the users, each local model carries the identifier of its local user, so at this point the intermediate layer can determine the correspondence between local models and local users. The intermediate layer then shuffles the local models among one another and adds perturbations to generate anonymized models; these anonymized models no longer correspond to particular local users and carry no private information, and their number equals the number of local models. The intermediate layer next performs anomaly detection on each anonymized model to obtain its anomaly-detection score. If any anomaly-detection score is greater than 0, the maximum score among them is taken as the malicious score of every one of the P local users, and all P local users are marked with the malicious-user identifier; otherwise, if all anomaly-detection scores equal 0, every one of the P local users is marked with the non-malicious-user identifier and given a non-malicious score of 0. The resulting identifiers and scores are then synchronized to the central server.
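The following minimal sketch shows this shuffle-perturb-score pipeline. It assumes Gaussian noise as the perturbation (the patent only says the perturbation may be differential-privacy noise), treats models as flat parameter vectors, and uses invented names throughout:

```python
import numpy as np

def anonymize_and_score(local_models, anomaly_score, sigma=0.01, seed=None):
    """Shuffle the P local models, perturb them, and score the results.

    local_models  -- list of 1-D np.ndarray parameter vectors, one per user
    anomaly_score -- callable mapping a perturbed model vector to a score >= 0
    """
    rng = np.random.default_rng(seed)
    # Shuffling breaks the model <-> user correspondence; the perturbation
    # hides what remains of the individual parameter values.
    order = rng.permutation(len(local_models))
    anon = [local_models[i] + rng.normal(0.0, sigma, size=local_models[i].shape)
            for i in order]
    scores = [anomaly_score(m) for m in anon]
    if max(scores) > 0:
        # The maximum score becomes the malicious score of all P users,
        # and every user is marked with the malicious-user identifier.
        return anon, "malicious", max(scores)
    return anon, "non-malicious", 0.0
```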
Step 130: determine malicious users from the user behavior identifiers and user behavior scores, and ban them.
Specifically, the central server updates the user score of each of the P local users according to the received malicious-user identifiers and malicious scores, obtaining P new user scores. It compares each new user score with a preset user-score threshold, determines the local users whose new scores exceed the threshold to be malicious users, and then bans those malicious users.
With the layered secure federated learning method provided by the invention, the central server remedies the privacy leakage of the model-upload step in existing federated learning by issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer instead of uploading it directly to the central server. Further, after obtaining the user behavior identifiers and user behavior scores generated by the intermediate layer's anonymization of the local models, the central server determines malicious users based on them and bans those users. The anonymization performed by the intermediate layer comprises shuffling the local models and adding perturbations to generate anonymized models, then performing anomaly detection on each anonymized model to obtain its user behavior identifier and user behavior score; anonymizing and anomaly-checking the private information carried by each local model through the intermediate layer improves the security and reliability of privacy protection, and having the central server determine and ban malicious users from these identifiers and scores allows different malicious users to be located and banned quickly, achieving an efficient defense.
Optionally, when the intermediate layer includes at least two sub-intermediate layers, the implementation of step 110 may include:
first, grouping all local users to obtain at least two user groups, each containing at least two local users; then, establishing a correspondence between the user groups and the sub-intermediate layers; and finally, when issuing the global model to each local user, carrying the identifier of the sub-intermediate layer that is to receive the local model generated after the local user trains the global model, so that each local user sends its local model to the corresponding sub-intermediate layer.
Specifically, the central server stores in advance a table of correspondences between each local user and its user identifier, randomly groups the users, determines the number of user groups and the number of local users per group, establishes the correspondence between each user group and a sub-intermediate layer, and, when issuing the global model to each local user, carries the identifier of the sub-intermediate layer that is to receive the resulting local model. For example, with 100 local users, they may be divided into 10 user groups, each corresponding to one sub-intermediate layer: user group 1 (local users 1-10) corresponds to sub-intermediate layer 1, user group 2 (local users 11-20) to sub-intermediate layer 2, ..., and user group 10 (local users 91-100) to sub-intermediate layer 10. Accordingly, the central server carries the identifier of sub-intermediate layer 1 when issuing the global model to local users 1-10, so that they send their trained local models to sub-intermediate layer 1; it carries the identifier of sub-intermediate layer 2 when issuing the global model to local users 11-20, so that they send their trained local models to sub-intermediate layer 2; ...; and it carries the identifier of sub-intermediate layer 10 when issuing the global model to local users 91-100, so that they send their trained local models to sub-intermediate layer 10.
It should be noted that the global model is trained over multiple rounds until a converged model of acceptable accuracy is obtained. The 100 local users may be randomly divided into 10 user groups, each corresponding to one sub-intermediate layer; in the next round, the 100 local users are shuffled and regrouped, while the number of user groups and the number of local users per group remain unchanged from round to round. For example, in round 1 sub-intermediate layer 1 collects the 10 local models of local users 1-10 and sub-intermediate layer 2 collects those of local users 11-20, while in round 2 sub-intermediate layer 1 may collect the 10 local models of local users 15-24 and sub-intermediate layer 2 those of local users 53-62. Because each local user thus moves among different sub-intermediate layers, each local user can be regarded as a moving target.
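A minimal sketch of this per-round regrouping, assuming a simple seeded shuffle (the patent only requires that users be randomly regrouped each round while the group count and group size stay fixed):

```python
import random

def regroup(user_ids, num_groups, round_seed):
    """Randomly partition users into equal-size groups; a fresh seed each
    round makes every user a "moving target" across sub-intermediate layers."""
    ids = list(user_ids)
    random.Random(round_seed).shuffle(ids)
    size = len(ids) // num_groups
    return {g + 1: ids[g * size:(g + 1) * size] for g in range(num_groups)}

# 100 users, 10 groups: membership changes between rounds, sizes do not.
round_1 = regroup(range(1, 101), 10, round_seed=1)
round_2 = regroup(range(1, 101), 10, round_seed=2)
```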
With the layered secure federated learning method provided by the invention, the central server determines the user groups, assigns each user group to its sub-intermediate layer, and carries the corresponding sub-intermediate-layer identifier when issuing the global model to the respective local users. This hides information quickly and reliably when the number of local users is large, and makes the protection of each local user's private information flexible and efficient.
Optionally, the implementation of step 120 may include:
first, the intermediate layer generates anonymized models by shuffling the model identifiers, shuffling the model contents, and adding perturbations to the local models; then anomaly detection is performed on each anonymized model to obtain the user behavior identifiers and user behavior scores.
Specifically, when the number of local users is small, they can form a single user group, and the intermediate layer can directly receive the local model each local user generates by training the global model, without dividing into sub-intermediate layers. When the number of local users is large, they can be divided into several user groups, each corresponding to one sub-intermediate layer, and each sub-intermediate layer receives the local models uploaded by the local users of its group. The intermediate layer, or each sub-intermediate layer, then shuffles the identifiers and the contents of the at least two local models it received and adds perturbations, generating at least two anonymized models. The number of anonymized models equals the number of received local models, but the anonymized models no longer correspond to the local users who sent the local models and carry no private information; that is, an individual anonymized model has no practical significance at this point.
Then the intermediate layer or each sub-intermediate layer performs anomaly detection on each anonymized model, which may be done with an existing anomaly-detection method, to obtain the user behavior identifiers and user behavior scores. Existing anomaly-detection methods include the autoencoder (AutoEncoder) algorithm, the variational autoencoder (VAE) algorithm, the Beta variational autoencoder (Beta-VAE) algorithm, single-objective generative adversarial active learning (SO-GAAL), multi-objective generative adversarial active learning (MO-GAAL), and deep one-class classification (Deep SVDD); the VAE algorithm is preferred.
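As an illustration of the reconstruction-based detectors in this list, the sketch below scores anonymized models by the reconstruction error of a plain autoencoder. The architecture, training budget, and use of mean squared reconstruction error as the score are all assumptions, and the patent's preferred variant is the VAE rather than this plain autoencoder:

```python
import torch
from torch import nn

class ModelAutoencoder(nn.Module):
    """Tiny autoencoder over flattened model-parameter vectors."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(anon_models, epochs=200, lr=1e-3):
    """Fit the autoencoder to the batch of anonymized models and return each
    model's mean squared reconstruction error as its anomaly score."""
    x = torch.stack([torch.as_tensor(m, dtype=torch.float32) for m in anon_models])
    ae = ModelAutoencoder(x.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(x), x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return ((ae(x) - x) ** 2).mean(dim=1).tolist()  # one score per model
```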
It should be noted that the anomaly-detection processing performed by the intermediate layer or each sub-intermediate layer works as follows: the intermediate layer or sub-intermediate layer performs anomaly detection on each anonymized model it holds, obtaining the anomaly-detection scores of all its anonymized models. It then judges whether any anomaly-detection score exceeds a preset malicious threshold. Suppose the preset malicious threshold is 0 and some anomaly-detection scores greater than 0 are found. If the intermediate layer serves only one user group, all local users in that group are marked as malicious users, and the maximum anomaly-detection score is taken as the malicious score of each local user in the group. If the intermediate layer is divided into several sub-intermediate layers, each corresponding to one user group, then for every user group with an anomaly-detection score greater than 0, the maximum anomaly-detection score within that group is taken as the malicious score of each of its local users, and each local user in the group is marked as a malicious user.
For example, when the user groups with scores above the threshold are user groups 1, 3, and 5, the maximum anomaly-detection score of user group 1 becomes the malicious score of the local users in user group 1 and all of them are marked as malicious users; likewise for user groups 3 and 5. The user groups 1, 3, and 5 thus marked as malicious, together with their malicious scores, are synchronized to the central server.
Conversely, when the preset malicious threshold is 0 and no anomaly-detection score greater than 0 is found, it can be concluded that no malicious user exists; the corresponding user group is marked as non-malicious, the non-malicious scores of its local users are all 0, and the user group marked as non-malicious, together with its non-malicious scores, is synchronized to the central server.
In addition, it should be noted that when 1 malicious user exists among 100 local users, only one user group is marked as malicious and the remaining groups are marked as non-malicious; when several malicious users exist among the 100 local users, several user groups may be marked as malicious and the remaining groups are marked as non-malicious.
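The group-marking rule just described can be sketched as follows (the dictionary layout and names are assumptions):

```python
def mark_user_groups(group_scores, threshold=0.0):
    """group_scores maps a group id to the anomaly-detection scores of that
    group's anonymized models. Every user in a group whose maximum score
    exceeds the threshold inherits that maximum as its malicious score."""
    marks = {}
    for g, scores in group_scores.items():
        top = max(scores)
        marks[g] = ("malicious", top) if top > threshold else ("non-malicious", 0.0)
    return marks

# With threshold 0, groups 1 and 3 are marked malicious, group 2 non-malicious.
marks = mark_user_groups({1: [0.7, 0.2], 2: [0.0, 0.0], 3: [0.4, 0.9]})
```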
With the layered secure federated learning method provided by the invention, the central server obtains the user behavior identifiers and user behavior scores generated after the intermediate layer performs anomaly detection on the anonymized models. Because the anonymized models are generated by shuffling the local models' identifiers, shuffling their contents, and adding perturbations, the central server can quickly determine from the received identifiers and scores whether malicious users exist among the local users. The layered, shuffled, dynamic learning mechanism greatly improves the security of privacy protection in federated learning and provides a solid basis for subsequently locating malicious users accurately.
Optionally, the implementation of step 130 may include:
first, correspondingly updating the at least two user scores of the local users in the user groups according to the user behavior identifiers and user behavior scores, obtaining at least two new user scores; then, when a target user score exceeding a preset user-score threshold exists among the new user scores, determining the local user corresponding to the target user score to be a malicious user and banning that malicious user.
Specifically, suppose the global model issued by the central server is already a converged model of acceptable accuracy. When the user behavior identifiers and scores received by the central server include malicious-user identifiers, malicious scores, non-malicious-user identifiers, and non-malicious scores, the user score of each local user with malicious activity, and of each without, is updated accordingly, yielding several new user scores, each corresponding to one local user. The central server then judges whether any new user score exceeds the preset user-score threshold; if a target user score does, the local user corresponding to it is marked as a malicious user, the malicious user is marked as banned, and the weight of any local model trained by the banned user is set to 0.
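A sketch of the score-update-and-ban step; note that the additive accumulation used here is an illustrative assumption, since the patent defines its own update formulas (given later, as images in the original publication):

```python
def update_scores_and_ban(user_scores, marks, members, score_threshold):
    """user_scores: user id -> cumulative score; marks: group id -> (label,
    group score); members: group id -> list of user ids. The additive update
    below is an illustrative assumption, not the patent's exact formula."""
    banned = set()
    for g, (_, group_score) in marks.items():
        for u in members[g]:
            user_scores[u] = user_scores.get(u, 0.0) + group_score
            if user_scores[u] > score_threshold:
                banned.add(u)  # a banned user's future uploads get weight 0
    return banned
```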
It should be noted that malicious users can be classified by degree of malice into curious, semi-malicious, and purely malicious. When there are several malicious users, they are allowed to collude, but their fraction of all local users must not be too large; it is usually much smaller than 1. A malicious user may have strong background knowledge and may obtain any information needed to complete an attack, except that it cannot locate the datasets of the local users. Malicious attacks can also be classified into confidentiality, availability, and integrity threats. A confidentiality threat means the attacker invades local users' privacy by recovering undisclosed data, which may include sensitive information, from the public local and global models. An availability threat means the attacker can disable the global model in addition to leaking data. An integrity threat, similar to a backdoor attack or model inversion, manipulates the aggregated global model with slight modifications so that it misbehaves on a few tasks while the remaining main tasks are unaffected.
With the layered secure federated learning method provided by the invention, the central server checks whether the at least two new user scores, obtained after the update based on the user behavior identifiers and scores, contain a target user score exceeding the preset user-score threshold. Given that the global model has been trained to convergence, this quickly locates malicious users and bans them, improving the accuracy and reliability of the bans.
Optionally, if the global model currently held by the central server is not yet a converged model of acceptable accuracy, malicious users can be determined and banned while the global model continues to be trained. In that case, after the step of correspondingly updating the at least two user scores of the local users in the user groups according to the user behavior identifiers and scores to obtain at least two new user scores, the method may further include:
first, determining the weight of each anonymized model in the corresponding user group based on the user behavior score; then performing a model aggregation operation according to the weights of the anonymized models to obtain a new global model, returning to the step of issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer, and executing the next round of training; and finally obtaining a global model that meets the preset accuracy requirement and satisfies the convergence condition.
Specifically, the central server updates the user score of the corresponding local user according to the malicious-user identifier and malicious score synchronized by the intermediate layer, and likewise according to the non-malicious-user identifier and non-malicious score. It sets the weight of each anonymized model from a malicious user group to the reciprocal of the corresponding malicious score, and the weight of each anonymized model from a non-malicious group to 1. Since a malicious score is usually greater than 0, malicious users are thereby down-weighted.
For example, when user group 1 is marked as malicious, the user score of each local user in user group 1 is updated with the malicious score of user group 1, and the weights of the 10 anonymized models of user group 1 are the reciprocal of that malicious score. When user group 2 is not marked as malicious (for example, the anomaly-detection scores of its 10 anonymized models are all 0), the user score of each local user in user group 2 is updated with the normal detection score of user group 2 (for example, 0), and the weights of its 10 anonymized models are all 1.
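The weighting rule translates directly into a weighted average; a sketch under the same flat-vector assumption as above:

```python
import numpy as np

def aggregate_global_model(anon_models, group_of, marks):
    """Weighted average of anonymized models: models from a group marked
    malicious enter with weight 1/score, all other models with weight 1."""
    weights = []
    for i in range(len(anon_models)):
        label, score = marks[group_of[i]]
        weights.append(1.0 / score if label == "malicious" else 1.0)
    stacked = np.stack([np.asarray(m, dtype=float) for m in anon_models])
    return np.average(stacked, axis=0, weights=weights)
```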
It should be noted that the process by which the central server updates the user score of each local user is as follows: first compute the malicious value Err of the m-th user group in the t-th round of training; then compute the time-averaged intensity f_A(u_i; t) of the potential poisoning attack against the i-th local user u_i of the m-th user group in the t-th round; further compute the user score V(u_i, t) of u_i in the t-th round; and finally compute the new, updated user score of u_i for the t-th round. [The four update formulas are reproduced only as images in the original publication and are omitted here.] In these formulas, w(i', j) is an element of the model to be detected in the m-th user group in the t-th round; ŵ(i', j) is the corresponding element of that model as reconstructed by the autoencoder; U_m^t denotes the m-th user group in the t-th round; U^t denotes all user groups in the t-th round; N is the total number of anonymized models uploaded to the central server; D_i is the weight of the i-th anonymized model in the t-th round, and D_j the weight of the j-th; ζ is a preset protection parameter, which may take the value 0.001; and L is a preset weight-adjustment coefficient, which may take the value 0.2. In this way, the weights of the different models can be adjusted dynamically according to the user scores adjusted over the previous t-1 rounds, so that malicious users in federated learning training can be quickly identified and located and an efficient defense completed.
With the layered secure federated learning method provided by the invention, the central server obtains a new global model from the weights of the anonymized models in each user group, determined from the user behavior scores, and starts the next round of training by issuing the new model to the local users. A converged global model meeting the preset accuracy requirement is eventually obtained, improving the reliability and accuracy of the global model the central server trains.
Optionally, before the step of obtaining the global model that meets the preset accuracy requirement and satisfies the convergence condition, the method may further include:
first, when the number of training rounds reaches M, performing the corresponding user-score update according to the new user behavior identifiers and scores generated by the M-th round of training, obtaining at least two new user scores for the M-th round, where M is a positive integer; then judging whether a target user score exceeding the preset user-score threshold exists among them; and, when it does, determining the corresponding local user to be a malicious user and banning that malicious user.
Specifically, during training of the global model, the central server initially issues the global model automatically generated by the federated learning framework to each local user for the first round of training, obtains a new global model, sends it to each local user for the second round, and so on, until the number of rounds reaches M. The central server then updates the local users' user scores according to the new malicious scores and malicious-user identifiers generated by the M-th round, and judges whether any of the at least two new user scores of the M-th round exceeds the preset user-score threshold. If so, the local user corresponding to the target user score is marked as a malicious user and banned; the banned local user can be marked as a banned user, and the weight of any model the banned user trains is further set to 0.
For example, with M = 50, suppose that after 50 rounds of training the updated user score of local user 2 in user group 1 (marked as malicious) exceeds the preset user-score threshold. Local user 2 is then marked as a banned user: when local user 2 uploads a trained model, its weight is set to 0; when the other, non-banned local users of user group 1 upload trained models, their weights are set to the reciprocal of user group 1's malicious score. And when the updated user score of every local user in user group 2 stays below the preset threshold, the weight of each model they train and upload is set to 1.
With the layered secure federated learning method provided by the invention, the central server checks, after updating the new user scores obtained when the global model has been trained for a preset number of rounds, whether any score exceeds the preset user-score threshold. The central server can thus keep training a global model that has not yet converged or met the accuracy requirement while it determines and bans malicious users among the local users; that is, banning malicious users does not interfere with training the global model. This both satisfies federated learning's requirement to protect local users' private data during training and markedly improves its defenses against various malicious attacks, increasing the versatility and flexibility of the central server.
Optionally, the step of obtaining the converged global model that meets the preset accuracy requirement includes: when the number of training rounds reaches N, performing an accuracy test and a convergence test on the new global model generated by the N-th round of training, where N is a positive integer and M may or may not equal N; if the new global model generated by the N-th round meets the preset accuracy requirement and satisfies the preset convergence condition, determining it to be the converged global model meeting the preset accuracy requirement; and if it does not meet the preset accuracy requirement and/or does not satisfy the preset convergence condition, returning to the step of issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer, so as to continue training.
Specifically, when the central server determines that the current round is the N-th, it performs an accuracy test and a convergence test on the new global model generated by that round. If the model's accuracy meets the preset accuracy requirement and its loss has reached the preset loss threshold, the new global model is both sufficiently accurate and converged; the central server can conclude that the model meets the requirements, end training, and use the trained global model directly thereafter. Conversely, if the accuracy does not meet the preset requirement and/or the loss has not reached the preset loss threshold, the new global model is not yet adequately trained and training cannot end; the method then returns to step 110 with the global model generated by the N-th round to perform the next round of training. Reaching the preset loss threshold indicates that the model's loss has bottomed out and is stable with no further downward trend, and meeting the preset accuracy requirement indicates that the model is sufficiently accurate.
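A sketch of this round-N acceptance test, assuming a helper `evaluate` that returns the model's test accuracy and loss (both the helper and the exact comparisons are assumptions):

```python
def round_n_check(global_model, evaluate, acc_target, loss_threshold):
    """Return True when training can end: the model meets the preset
    accuracy requirement and its loss has reached the preset loss threshold
    (i.e. it has stopped decreasing)."""
    accuracy, loss = evaluate(global_model)  # evaluate is an assumed helper
    return accuracy >= acc_target and loss <= loss_threshold
```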
With the layered secure federated learning method provided by the invention, the accuracy and convergence tests the central server runs on the new global model generated by the N-th round determine whether the trained global model has been obtained or training should continue, improving the accuracy and reliability of the global model the central server trains.
As shown in FIG. 2, the invention provides a layered secure federated learning system 200, comprising a central server 210, a number of local users 220, and an intermediate layer 230, wherein:
the central server 210 is configured to issue a global model to each local user and instruct each local user to send the local model generated by training the global model to the intermediate layer;
each local user 220 is configured to train the received global model to generate a local model and send the local model to the intermediate layer; and
the intermediate layer 230 is configured to anonymize each local model to generate the user behavior identifiers and user behavior scores, where the anonymization comprises shuffling the local models and adding perturbations to generate anonymized models, performing anomaly detection on the anonymized models to obtain the user behavior identifiers and user behavior scores, and synchronizing them to the central server; the central server 210 is further configured to determine malicious users from the user behavior identifiers and user behavior scores and ban them.
It should be noted that for the other functions performed by the central server 210, the local users 220, and the intermediate layer 230, reference may be made to the foregoing method embodiments; they are not repeated here.
The layered secure federated learning apparatus provided by the invention is described below; the apparatus described below and the layered secure federated learning method described above may be referred to in correspondence with each other.
As shown in FIG. 3, the invention provides a layered secure federated learning apparatus 300, comprising:
a sending module 310, configured to issue a global model to each local user and instruct each local user to send the local model generated by training the global model to an intermediate layer;
an obtaining module 320, configured to obtain the user behavior identifier and user behavior score generated after the intermediate layer anonymizes each local model, where the anonymization comprises the intermediate layer shuffling the local models and adding perturbations to generate anonymized models and then performing anomaly detection on the anonymized models to obtain the user behavior identifier and user behavior score; and
a processing module 330, configured to determine malicious users from the user behavior identifiers and user behavior scores and ban them.
Optionally, the sending module 310 may be specifically configured to group all local users to obtain at least two user groups, each containing at least two local users; establish a correspondence between the user groups and the sub-intermediate layers; and, when issuing the global model to each local user, carry the identifier of the sub-intermediate layer that is to receive the local model generated after the local user trains the global model, so that each local user sends its local model to the corresponding sub-intermediate layer.
Optionally, the obtaining module 320 may be specifically configured to obtain the user behavior identifier and user behavior score generated after the intermediate layer performs anomaly detection on each anonymized model, where the anonymized models are generated by shuffling the identifiers of the local models, shuffling their contents, and adding perturbations.
Optionally, the processing module 330 may be specifically configured to correspondingly update the at least two user scores of the local users in the user groups according to the user behavior identifiers and scores to obtain at least two new user scores, and, when a target user score exceeding a preset user-score threshold exists among them, determine the corresponding local user to be a malicious user and ban that malicious user.
Optionally, the processing module 330 may be further configured to determine the weight of each anonymized model in the corresponding user group based on the user behavior score; perform a model aggregation operation according to the weights of the anonymized models to obtain a new global model; then return to the step of issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer, so as to execute the next round of training; and obtain a global model that meets the preset accuracy requirement and satisfies the convergence condition.
Optionally, the processing module 330 may be further configured to, when the number of training rounds reaches M, perform the corresponding user-score update according to the new user behavior identifiers and scores generated by the M-th round of training, obtaining at least two new user scores for the M-th round, where M is a positive integer; judge whether a target user score exceeding the preset user-score threshold exists among them; and, when it does, determine the corresponding local user to be a malicious user and ban that malicious user.
Optionally, the processing module 330 may be further configured to, when the number of training rounds reaches N, perform an accuracy test and a convergence test on the new global model generated by the N-th round of training, where N is a positive integer; if the new global model meets the preset accuracy requirement and satisfies the preset convergence condition, determine it to be the converged global model meeting the preset accuracy requirement; and if it does not meet the preset accuracy requirement and/or does not satisfy the preset convergence condition, return to the step of issuing the global model to each local user and instructing each local user to send the local model generated by training the global model to the intermediate layer, so as to continue training.
Fig. 4 illustrates a physical structure diagram of an electronic device, and as shown in fig. 4, the electronic device 400 may include: a processor (processor)410, a communication Interface 420, a memory (memory)430 and a communication bus 440, wherein the processor 410, the communication Interface 420 and the memory 430 communicate with each other via a communication bus 840. The processor 410 may invoke logic instructions in the memory 430 to perform a hierarchical secure federal learning method comprising:
issuing a global model to each local user, and instructing each local user to send the local model generated after training the global model to a middle layer;
obtaining a user behavior identifier and a user behavior score generated after the middle layer performs anonymization processing on each local model, wherein the anonymization processing comprises the middle layer performing model shuffling and perturbation addition on each local model to generate anonymous models, and then performing anomaly detection processing on the anonymous models to obtain the user behavior identifier and the user behavior score;
and determining and banning a malicious user according to the user behavior identifier and the user behavior score.
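Putting the earlier sketches together, one illustrative round of this method (with simulated local training and a crudely poisoned update; every name is hypothetical and the sketches above are assumed to be in scope) might run as:

```python
import numpy as np

rng = np.random.default_rng(0)
global_model = np.zeros(8)

# Simulated local users: each would normally train the global model on
# private data; here we just add small random updates.
local_models = {f"user{i}": global_model + rng.normal(0.0, 0.1, 8) for i in range(3)}
local_models["user2"] += 5.0  # a crudely poisoned (malicious) update

anonymous_models, behavior_scores = middle_layer_round(local_models, rng=rng)
user_scores = {}
banned = update_scores_and_ban(user_scores, behavior_scores, threshold=3.0)
print("behavior scores:", behavior_scores)
print("banned users:", banned)
```

With these inputs the poisoned update sits far from the element-wise median, so its behavior score exceeds the threshold and the corresponding user is banned.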
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, the computer program product comprising a computer program storable on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer is caused to execute the layered security federal learning method provided by the above embodiments, the method comprising:
issuing a global model to each local user, and instructing each local user to send the local model generated after training the global model to a middle layer;
obtaining a user behavior identifier and a user behavior score generated after the middle layer performs anonymization processing on each local model, wherein the anonymization processing comprises the middle layer performing model shuffling and perturbation addition on each local model to generate anonymous models, and then performing anomaly detection processing on the anonymous models to obtain the user behavior identifier and the user behavior score;
and determining and banning a malicious user according to the user behavior identifier and the user behavior score.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the layered security federal learning method provided by the above embodiments, the method comprising:
issuing a global model to each local user, and instructing each local user to send the local model generated after training the global model to a middle layer;
obtaining a user behavior identifier and a user behavior score generated after the middle layer performs anonymization processing on each local model, wherein the anonymization processing comprises the middle layer performing model shuffling and perturbation addition on each local model to generate anonymous models, and then performing anomaly detection processing on the anonymous models to obtain the user behavior identifier and the user behavior score;
and determining and banning a malicious user according to the user behavior identifier and the user behavior score.
The above-described apparatus embodiments are merely illustrative: the units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units, i.e., they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments or parts thereof.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A layered security federal learning method, comprising:
issuing a global model to each local user, and instructing each local user to send the local model generated after training the global model to a middle layer;
obtaining a user behavior identifier and a user behavior score generated after the middle layer performs anonymization processing on each local model, wherein the anonymization processing comprises the middle layer performing model shuffling and perturbation addition on each local model to generate anonymous models, and then performing anomaly detection processing on the anonymous models to obtain the user behavior identifier and the user behavior score;
and determining and banning a malicious user according to the user behavior identifier and the user behavior score.
2. The layered security federal learning method according to claim 1, wherein, when the middle layer includes at least two sub-middle layers, the step of issuing the global model to each local user and instructing each local user to send the local model generated after training the global model to the middle layer comprises:
grouping all local users to obtain at least two user groups, each user group containing at least two local users;
establishing a correspondence between the user groups and the sub-middle layers;
and, when the global model is issued to each local user, carrying a corresponding sub-middle-layer identifier with the global model, so that each local user sends the local model it generates after training the global model to the corresponding sub-middle layer.
3. The layered security federal learning method according to claim 1, wherein the obtaining of the user behavior identifier and the user behavior score generated after the middle layer performs anonymization processing on each local model comprises:
obtaining a user behavior identifier and a user behavior score generated after the middle layer performs anomaly detection processing on each anonymous model, wherein the anonymous models comprise models generated after model identifier shuffling, model content shuffling, and perturbation addition are performed on each local model.
4. The layered security federal learning method according to claim 1, wherein the step of determining and banning a malicious user according to the user behavior identifier and the user behavior score comprises:
correspondingly updating at least two user scores of each local user in the user group according to the user behavior identifier and the user behavior score to obtain at least two new user scores;
and, when a target user score exceeding a preset user score threshold exists among the at least two new user scores, determining that the target user score corresponds to a malicious user and executing a banning process on the malicious user.
5. The layered security federal learning method according to claim 4, wherein, after the step of correspondingly updating at least two user scores of each local user in the user group according to the user behavior identifier and the user behavior score to obtain at least two new user scores, the method further comprises:
determining the weight of each anonymous model in the corresponding user group based on the user behavior score;
performing a model aggregation operation according to the weights of the anonymous models to obtain a new global model, and then returning to the step of issuing the global model to each local user and instructing each local user to send the local model generated after training the global model to the middle layer, so as to execute the next round of training;
until a global model that meets the preset accuracy requirement and satisfies the convergence condition is obtained.
6. The layered security federal learning method according to claim 5, wherein, before the step of obtaining the global model that meets the preset accuracy requirement and satisfies the convergence condition, the method further comprises:
when the number of training rounds reaches M rounds, performing an update of the corresponding user scores according to the new user behavior identifiers and new user behavior scores generated by the Mth round of training to obtain at least two new user scores for the Mth round, wherein M is a positive integer;
judging whether a target user score exceeding the preset user score threshold exists among the at least two new user scores;
and, when the target user score exists among the at least two new user scores, determining that the target user score corresponds to a malicious user and executing a banning process on the malicious user.
7. The layered security federal learning method according to claim 5, wherein the step of obtaining the converged global model that meets the preset accuracy requirement comprises:
when the number of training rounds reaches N rounds, performing an accuracy test and a convergence test on the new global model generated by the Nth round of training, wherein N is a positive integer;
if the new global model generated by the Nth round of training meets the preset accuracy requirement and satisfies the preset convergence condition, determining the new global model generated by the Nth round of training as the converged global model meeting the preset accuracy requirement;
and, if the new global model generated by the Nth round of training does not meet the preset accuracy requirement and/or does not satisfy the preset convergence condition, returning to the step of issuing the global model to each local user and instructing each local user to send the local model generated after training the global model to the middle layer, so as to continue training.
8. A layered security federal learning system, comprising a central server, a plurality of local users, and a middle layer, wherein:
the central server is configured to issue a global model to each local user and instruct each local user to send the local model generated after training the global model to the middle layer;
each local user is configured to train the received global model to generate a local model and send the local model to the middle layer;
the middle layer is configured to generate a user behavior identifier and a user behavior score after performing anonymization processing on each local model, wherein the anonymization processing comprises the middle layer performing model shuffling and perturbation addition on each local model to generate anonymous models, performing anomaly detection processing on the anonymous models to obtain the user behavior identifier and the user behavior score, and synchronizing the user behavior identifier and the user behavior score to the central server; and the central server is further configured to determine and ban a malicious user according to the user behavior identifier and the user behavior score.
9. A layered security federal learning device, comprising:
a sending module, configured to issue a global model to each local user and instruct each local user to send the local model generated after training the global model to a middle layer;
an obtaining module, configured to obtain a user behavior identifier and a user behavior score generated after the middle layer performs anonymization processing on each local model, wherein the anonymization processing comprises the middle layer performing model shuffling and perturbation addition on each local model to generate anonymous models, and then performing anomaly detection processing on the anonymous models to obtain the user behavior identifier and the user behavior score;
and a processing module, configured to determine and ban a malicious user according to the user behavior identifier and the user behavior score.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the layered security federal learning method according to any one of claims 1 to 7.
11. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the layered security federal learning method according to any one of claims 1 to 7.
12. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the layered security federal learning method according to any one of claims 1 to 7.
CN202111444193.6A 2021-11-30 2021-11-30 Layered security federal learning method and device, electronic equipment and storage medium Pending CN114398635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111444193.6A CN114398635A (en) 2021-11-30 2021-11-30 Layered security federal learning method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111444193.6A CN114398635A (en) 2021-11-30 2021-11-30 Layered security federal learning method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114398635A true CN114398635A (en) 2022-04-26

Family

ID=81225317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111444193.6A Pending CN114398635A (en) 2021-11-30 2021-11-30 Layered security federal learning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114398635A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116049862A (en) * 2023-03-13 2023-05-02 杭州海康威视数字技术股份有限公司 Data protection method, device and system based on asynchronous packet federation learning
CN116049862B (en) * 2023-03-13 2023-06-27 杭州海康威视数字技术股份有限公司 Data protection method, device and system based on asynchronous packet federation learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination