CN110008696A - User data reconstruction attack method for deep federated learning - Google Patents
User data reconstruction attack method for deep federated learning
- Publication number
- CN110008696A (application CN201910249056.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- model
- data
- study
- federation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/552—Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a user data reconstruction attack method for deep federated learning. Unlike prior attack methods, which can only reconstruct class-representative data, this method reconstructs the private data of a specific user, and it assumes the attack is mounted by a malicious server, which avoids introducing negative effects into the original shared model. The method further describes a multi-task generative adversarial model that approximates the distribution of user data: the model is trained to judge the authenticity, class, and owning-user identity of input samples, improving the quality of the generated samples. To better distinguish different users, the method introduces an optimization-based calculation of user data representatives that characterizes the features of the users participating in federated learning and completes the training of the adversarial model. Against existing federated learning architectures focused on privacy protection, the proposed data reconstruction attack based on the multi-task generative adversarial model can cause privacy leakage.
Description
Technical field
The present invention relates to a user data reconstruction attack method for deep federated learning, and belongs to the field of artificial intelligence security.
Background technique
In recent years, deep learning has been increasingly applied in the network domain, for example in combination with crowd-sensing tasks. Traditional centralized training requires crowdsourced data to be collected and stored centrally, which typically brings problems such as large-volume data transmission, high computational demand, and privacy leakage. Therefore, collaborative learning frameworks, as mobile edge computing frameworks for deep learning, have received significant attention and study: they let multiple data sources jointly benefit from a shared model trained on all of their data, without the data having to be uploaded to central storage.
Federated learning is one of the mainstream collaborative learning frameworks: the data sources (i.e., users) first agree on the learning objective and model architecture, and the server initializes a shared learning model. Each user trains the learning model locally on its own private dataset, and the server collects the post-training model/parameter updates to refresh the globally shared model. The global model is then issued to the users again, and this local training process iterates until training ends. The advantage of federated learning is that it avoids explicit access to the training data by the central server, improving the privacy protection of deep learning.
Recent studies have shown that collaborative learning is vulnerable to inference attacks that leak data privacy, such as reconstruction attacks and membership inference attacks. Because the updates of the shared model are derived from the private datasets, the data patterns are "encoded" into the uploaded parameter updates; an attacker therefore only needs to construct a corresponding "decoder" to recover private data. Existing attack methods usually assume a malicious user attacking a target, and can typically only infer class-representative data. In a face recognition task, for example, an existing attack can infer a generic sample for some identity, but cannot reconstruct the samples owned by a specific user; although such a generic sample characterizes the features of the class, it does not actually cause a real privacy leak. This is because a malicious user can only access the updated model from the server, which is aggregated from the previous round of updates of all users, so the attack cannot be directed at a specific target. In addition, existing attack methods require modifying the structure of the shared model to mount the attack, injecting malicious influence into the normal collaborative learning process. This assumption exceeds the attack capability of a malicious user and degrades the performance of the shared model, making the attack easy for the system to detect; that is, the attack has poor stealthiness.
The present invention considers that existing data reconstruction attacks against federated learning rest on overly strong assumptions about attacker capability, cannot be carried out in realistic scenarios, and cannot infer targeted, valuable private information. This motivates a malicious-server-based data reconstruction attack against user-level private data.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art and provide a user data reconstruction attack method based on a multi-task generative adversarial network.
The user data reconstruction attack method for deep federated learning comprises the following steps:
1) The malicious server participates in the normal federated learning process. The users first agree on the collaborative learning objective and model, and then the following is iterated: the server issues the shared model, each user trains it locally and uploads its model parameters to the server, and the server aggregates these parameter updates. This repeats until the model converges.
2) The malicious server locally builds a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D, where D simultaneously judges the authenticity, class, and owning-user identity of input samples, and G synthesizes user data.
3) The malicious server computes the data representatives of the different users from the parameter updates they submit; these representatives supervise the training of D's owning-user-identity task.
4) In each round of iteration, the malicious server uses the parameter updates of the target user and the other users as a substitute for training D's classification task.
5) The malicious server uses a locally held auxiliary real dataset and the fake dataset synthesized by G to train D's sample-authenticity task. Through the adversarial training of G and D, G becomes able to generate the private data of the specific user.
In this user data reconstruction attack method for deep federated learning, the malicious server is considered "honest and curious", meaning the attacker executes the normal learning task according to the rules of federated learning. Specifically, assume N users each own a private dataset; in realistic scenarios these data are usually not independent and identically distributed. After the users agree on the objective and model structure of federated learning, distributed training proceeds via the malicious server. Each round of the shared-model update can be expressed as
M_{t+1} = M_t + (1/N) · Σ_{k=1..N} ΔM_t^k
where M_t denotes the shared model after round t, and ΔM_t^k denotes the parameter update of user k in round t, computed locally by user k on its private data against M_t.
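The aggregation step described above can be sketched in plain Python. This is a minimal illustration only: the flat parameter lists and the uniform 1/N averaging weight are assumptions of the sketch, since the patent gives the update only abstractly.

```python
def aggregate(global_model, user_updates):
    """Federated aggregation: apply the mean of all user parameter
    updates (the delta M_t^k) to the current shared model M_t,
    producing M_{t+1}. Models are flat lists of parameters."""
    n = len(user_updates)
    return [
        w + sum(upd[i] for upd in user_updates) / n
        for i, w in enumerate(global_model)
    ]

# Round t: three users each submit a parameter update for a 2-parameter model.
m_t = [0.0, 1.0]
updates = [[0.3, -0.3],
           [0.3,  0.3],
           [0.3,  0.0]]
m_next = aggregate(m_t, updates)  # M_{t+1}
```

The malicious server runs exactly this honest aggregation; the attack happens on the side, using the individual `updates` before they are averaged away.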
In this method, the attacker builds at the server a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D. D completes three tasks: 1) judging whether a sample is real or fake; 2) classifying the sample; 3) identifying the sample's owning user. Compared with a standard generative adversarial network, the present invention additionally feeds the identity to be discriminated into the G model as a conditional input, with the aim of generating privacy samples of the specific user.
In this method, the structure of the D model can be expressed as
D_real = Sigmoid(FC_real(L_share))
D_cat = Sigmoid(FC_cat(L_share))
D_id = Sigmoid(FC_id(L_share))
where D_real, D_cat, and D_id denote the three tasks above, FC_real, FC_cat, and FC_id denote the fully connected neural-network layers corresponding to the three tasks, L_share denotes all layers of the shared model except the last layer, and Sigmoid is the activation function.
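The three-headed structure of D can be sketched as follows. This is a minimal plain-Python illustration, not the patent's implementation: the caller passes in the feature vector standing for L_share(x), and the layer width, random initialization range, and seed are all assumptions of the sketch.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class MultiTaskDiscriminator:
    """Three sigmoid output heads on a shared feature vector:
    FC_real (real/fake), FC_cat (class), FC_id (owning-user identity)."""

    def __init__(self, feat_dim, n_classes, n_users, seed=0):
        rng = random.Random(seed)

        def head(n_out):
            # One fully connected layer: n_out rows of feat_dim weights.
            return [[rng.uniform(-0.1, 0.1) for _ in range(feat_dim)]
                    for _ in range(n_out)]

        self.fc_real = head(1)
        self.fc_cat = head(n_classes)
        self.fc_id = head(n_users)

    def forward(self, features):
        # `features` stands in for L_share(x), the output of the
        # shared layers of D on an input sample x.
        def apply(fc):
            return [sigmoid(sum(w * f for w, f in zip(row, features)))
                    for row in fc]

        return {"real": apply(self.fc_real)[0],  # authenticity score
                "cat": apply(self.fc_cat),       # per-class scores
                "id": apply(self.fc_id)}         # per-user scores
```

Sharing the trunk L_share across the three heads is what lets the identity task reuse the features learned for classification and authenticity.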
In this method, the G model receives a class label, a user identity label, and random noise, and synthesizes samples of the specific user. In the training objective of the multi-task generative adversarial network, x ~ p_victim and x ~ p_other denote samples drawn from the victim user and from the other users respectively, and CE denotes the cross-entropy loss function. Since the attacker cannot directly obtain the users' private data, the data representative of each user, computed from the updates the users upload, serves as an approximate substitute.
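The cross-entropy term CE used for the class and identity tasks is the standard single-sample formulation, illustrated below; the small `eps` guard against log(0) is an implementation detail added here, not something the patent specifies.

```python
import math

def cross_entropy(probs, target_idx, eps=1e-12):
    """Single-sample cross-entropy loss: the negative log of the
    probability the model assigns to the true label, where the label
    is either the class or the owning-user identity."""
    return -math.log(probs[target_idx] + eps)
```

A confident correct prediction gives a loss near zero, while mass placed on the wrong label drives the loss up, which is what pushes D's class and identity heads toward the supervision signal.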
In this method, the attacker constructs the data representatives of the different users from the parameters the users upload. A data representative can be characterized as follows: when the data representative is used as private data to train the shared model, it produces a parameter update similar to that produced by the true private data. In the corresponding objective over the parameters of the shared model M_t, γ is a scale factor used to balance the magnitude of the update, and X_k denotes the data representative of user k.
In this method, to prevent the optimization-based algorithm from introducing excessive noise when computing the user data representatives, a local-difference sum is added as a regularization term, where x_ij denotes the pixel value of image x at position (i, j). The term accumulates the distances between adjacent pixels of the image, smoothing the generated image.
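The local-difference regularizer, a total-variation-style penalty, can be sketched as follows. The patent does not state whether the distance between adjacent pixels is squared or absolute, so the squared form used here is an assumption of the sketch.

```python
def local_difference_sum(x):
    """Total-variation-style regularizer: sum of squared differences
    between each pixel x[i][j] and its right and down neighbors.
    Large values mean a noisy image; minimizing this term smooths
    the reconstructed data representative."""
    h, w = len(x), len(x[0])
    total = 0.0
    for i in range(h):
        for j in range(w):
            if i + 1 < h:
                total += (x[i + 1][j] - x[i][j]) ** 2  # vertical neighbor
            if j + 1 < w:
                total += (x[i][j + 1] - x[i][j]) ** 2  # horizontal neighbor
    return total
```

Adding this term to the representative-fitting objective trades a little fidelity to the uploaded update for visibly less pixel noise in the optimized image.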
In this method, in each round the attacker's training of the G and D models is expressed as
θ_D ← θ_D − η_1 · ∇_{θ_D} L_D
θ_G ← θ_G − η_2 · ∇_{θ_G} L_G
where η_1 and η_2 denote learning rates, θ_D and θ_G denote the parameters of models D and G, and L_D and L_G denote their respective losses.
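The alternating update of D and G can be sketched as a generic gradient-descent round. The loss gradients are passed in as callables because the patent states its loss terms only abstractly; the ordering (D first, then G against the updated D) is the conventional adversarial schedule and is an assumption of the sketch.

```python
def gradient_step(params, grads, lr):
    """One SGD step: theta <- theta - lr * grad."""
    return [p - lr * g for p, g in zip(params, grads)]

def adversarial_round(theta_d, theta_g, grad_loss_d, grad_loss_g,
                      eta1=0.01, eta2=0.01):
    """One round of alternating training: update D on its multi-task
    loss with learning rate eta1, then update G on the adversarial
    loss with learning rate eta2."""
    theta_d = gradient_step(theta_d, grad_loss_d(theta_d, theta_g), eta1)
    theta_g = gradient_step(theta_g, grad_loss_g(theta_d, theta_g), eta2)
    return theta_d, theta_g
```

Repeating `adversarial_round` once per federated round interleaves the attack training with the honest aggregation the server performs anyway.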
Compared with the prior art, the present invention has the following beneficial effects. 1) Unlike existing attack mechanisms aimed at reconstructing class-representative data, the present invention can attack a specific user in federated learning and reconstruct that user's private data, causing a privacy leak; previous mechanisms could only reconstruct a characterization sample of some class, similar to no real sample, and thus caused essentially no privacy leak. 2) The present invention introduces a multi-task generative adversarial network model whose discriminator simultaneously judges a sample's authenticity, class, and owning-user identity, while the generator synthesizes user data, improving the authenticity of the synthesized samples. 3) The present invention introduces a user-data-representative calculation method that characterizes the features of different users' datasets and supervises the adversarial network model in synthesizing the data of a specific user. 4) Whereas previous attacks require the attacker to modify the structure of the shared model and introduce negative effects into federated learning, the present invention mounts the attack without disturbing the normal learning process, improving the stealthiness of the attack.
Brief description of the drawings
Fig. 1 is a framework diagram of the malicious-server-based data reconstruction attack against federated learning.
Fig. 2 is the model structure of the multi-task generative adversarial network.
Fig. 3 shows reconstruction results of the invention on the MNIST dataset.
Fig. 4 shows reconstruction results of the invention on the AT&T dataset.
Fig. 5 compares the reconstruction results of the invention with those of existing attack methods.
Specific embodiment
The user data reconstruction attack method for deep federated learning comprises the following steps:
1) The malicious server participates in the normal federated learning process. The users first agree on the collaborative learning objective and model, and then the following is iterated: the server issues the shared model, each user trains it locally and uploads its model parameters to the server, and the server aggregates these parameter updates. This repeats until the model converges.
2) The malicious server locally builds a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D, where D simultaneously judges the authenticity, class, and owning-user identity of input samples, and G synthesizes user data.
3) The malicious server computes the data representatives of the different users from the parameter updates they submit; these representatives supervise the training of D's owning-user-identity task.
4) In each round of iteration, the malicious server uses the parameter updates of the target user and the other users as a substitute for training D's classification task.
5) The malicious server uses a locally held auxiliary real dataset and the fake dataset synthesized by G to train D's sample-authenticity task. Through the adversarial training of G and D, G becomes able to generate the private data of the specific user.
In this user data reconstruction attack method for deep federated learning, the malicious server is considered "honest and curious", meaning the attacker executes the normal learning task according to the rules of federated learning. Specifically, assume N users each own a private dataset; in realistic scenarios these data are usually not independent and identically distributed. After the users agree on the objective and model structure of federated learning, distributed training proceeds via the malicious server. Each round of the shared-model update can be expressed as
M_{t+1} = M_t + (1/N) · Σ_{k=1..N} ΔM_t^k
where M_t denotes the shared model after round t, and ΔM_t^k denotes the parameter update of user k in round t, computed locally by user k on its private data against M_t.
In this method, the attacker builds at the server a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D. D completes three tasks: 1) judging whether a sample is real or fake; 2) classifying the sample; 3) identifying the sample's owning user. Compared with a standard generative adversarial network, the present invention additionally feeds the identity to be discriminated into the G model as a conditional input, with the aim of generating privacy samples of the specific user.
In this method, the structure of the D model can be expressed as
D_real = Sigmoid(FC_real(L_share))
D_cat = Sigmoid(FC_cat(L_share))
D_id = Sigmoid(FC_id(L_share))
where D_real, D_cat, and D_id denote the three tasks above, FC_real, FC_cat, and FC_id denote the fully connected neural-network layers corresponding to the three tasks, L_share denotes all layers of the shared model except the last layer, and Sigmoid is the activation function.
In this method, the G model receives a class label, a user identity label, and random noise, and synthesizes samples of the specific user. In the training objective of the multi-task generative adversarial network, x ~ p_victim and x ~ p_other denote samples drawn from the victim user and from the other users respectively, and CE denotes the cross-entropy loss function. Since the attacker cannot directly obtain the users' private data, the data representative of each user, computed from the updates the users upload, serves as an approximate substitute.
In this method, the attacker constructs the data representatives of the different users from the parameters the users upload. A data representative can be characterized as follows: when the data representative is used as private data to train the shared model, it produces a parameter update similar to that produced by the true private data. In the corresponding objective over the parameters of the shared model M_t, γ is a scale factor used to balance the magnitude of the update, and X_k denotes the data representative of user k.
In this method, to prevent the optimization-based algorithm from introducing excessive noise when computing the user data representatives, a local-difference sum is added as a regularization term, where x_ij denotes the pixel value of image x at position (i, j). The term accumulates the distances between adjacent pixels of the image, smoothing the generated image.
In this method, in each round the attacker's training of the G and D models is expressed as
θ_D ← θ_D − η_1 · ∇_{θ_D} L_D
θ_G ← θ_G − η_2 · ∇_{θ_G} L_G
where η_1 and η_2 denote learning rates, θ_D and θ_G denote the parameters of models D and G, and L_D and L_G denote their respective losses.
Embodiment 1
1) The malicious server participates in the normal federated learning process. The users first agree on the collaborative learning objective and model, and then the following is iterated: the server issues the shared model, each user trains it locally and uploads its model parameters to the server, and the server aggregates these parameter updates, until the model converges. Each round's update can be expressed as
M_{t+1} = M_t + (1/N) · Σ_{k=1..N} ΔM_t^k
where M_t denotes the shared model after round t, and ΔM_t^k denotes the parameter update of user k in round t, computed locally by user k on its private data against M_t.
2) The malicious server locally builds a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D, where D simultaneously judges the authenticity, class, and owning-user identity of input samples. The structure of the model can be expressed as
D_real = Sigmoid(FC_real(L_share))
D_cat = Sigmoid(FC_cat(L_share))
D_id = Sigmoid(FC_id(L_share))
where D_real, D_cat, and D_id denote the three tasks above, FC_real, FC_cat, and FC_id denote the fully connected neural-network layers corresponding to the three tasks, L_share denotes all layers of the shared model except the last layer, and Sigmoid is the activation function.
3) The malicious server computes the data representatives of the different users from the parameter updates they submit; these representatives supervise the training of D's owning-user-identity task. In the corresponding objective over the parameters of the shared model M_t, γ is a scale factor used to balance the magnitude of the update, and X_k denotes the data representative of user k.
4) In each round of iteration, the malicious server uses the parameter updates of the target user and the other users as a substitute for training D's classification task, where M_t denotes the shared model after round t and ΔM_t^k denotes the parameter update of user k in round t.
5) The malicious server uses a locally held auxiliary real dataset and the fake dataset synthesized by G to train D's sample-authenticity task. Through the adversarial training of G and D, G becomes able to generate the private data of the specific user. In each round, the training of the G and D models is expressed as
θ_D ← θ_D − η_1 · ∇_{θ_D} L_D
θ_G ← θ_G − η_2 · ∇_{θ_G} L_G
where η_1 and η_2 denote learning rates, θ_D and θ_G denote the parameters of models D and G, and L_D and L_G denote their respective losses.
The specific embodiments described herein merely illustrate the spirit of the invention. Those skilled in the art to which the invention belongs can make various modifications or additions to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope of the appended claims.
Claims (8)
1. A user data reconstruction attack method for deep federated learning, characterized by comprising the following steps:
Step 1: the malicious server participates in the normal federated learning process; the users first agree on the collaborative learning objective and model, and then the following is iterated: the server issues the shared model, each user trains it locally and uploads its model parameters to the server, and the server aggregates these parameter updates; this repeats until the model converges;
Step 2: the malicious server locally builds a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D, where D simultaneously judges the authenticity, class, and owning-user identity of input samples, and G synthesizes user data;
Step 3: the malicious server computes the data representatives of the different users from the parameter updates they submit; these representatives supervise the training of D's owning-user-identity task;
Step 4: in each round of iteration, the malicious server uses the parameter updates of the target user and the other users as a substitute for training D's classification task;
Step 5: the malicious server uses a locally held auxiliary real dataset and the fake dataset synthesized by G to train D's sample-authenticity task; through the adversarial training of G and D, G becomes able to generate the private data of the specific user.
2. The user data reconstruction attack method for deep federated learning of claim 1, characterized in that the malicious server is considered "honest and curious", meaning the attacker executes the normal learning task according to the rules of federated learning; specifically, assume N users each own a private dataset, and in realistic scenarios these data are usually not independent and identically distributed; after the users agree on the objective and model structure of federated learning, distributed training proceeds via the malicious server; specifically, each round of the shared-model update can be expressed as
M_{t+1} = M_t + (1/N) · Σ_{k=1..N} ΔM_t^k
where M_t denotes the shared model after round t, and ΔM_t^k denotes the parameter update of user k in round t, computed locally by user k on its private data against M_t.
3. The user data reconstruction attack method for deep federated learning of claim 1, characterized in that the attacker builds at the server a multi-task generative adversarial network model consisting of a generative model G and a discriminative model D; D completes three tasks: 1) judging whether a sample is real or fake; 2) classifying the sample; 3) identifying the sample's owning user; compared with a standard generative adversarial network, the present invention additionally feeds the identity to be discriminated into the G model as a conditional input, with the aim of generating privacy samples of the specific user.
4. The user data reconstruction attack method for deep federated learning of claim 1, characterized in that the structure of the D model can be expressed as
D_real = Sigmoid(FC_real(L_share))
D_cat = Sigmoid(FC_cat(L_share))
D_id = Sigmoid(FC_id(L_share))
where D_real, D_cat, and D_id denote the three tasks above, FC_real, FC_cat, and FC_id denote the fully connected neural-network layers corresponding to the three tasks, L_share denotes all layers of the shared model except the last layer, and Sigmoid is the activation function.
5. The user data reconstruction attack method for deep federated learning of claim 1, characterized in that the G model receives a class label, a user identity label, and random noise, and synthesizes samples of the specific user; in the training objective of the multi-task generative adversarial network, x ~ p_victim and x ~ p_other denote samples drawn from the victim user and from the other users respectively, and CE denotes the cross-entropy loss function; since the attacker cannot directly obtain the users' private data, the data representative of each user, computed from the updates the users upload, serves as an approximate substitute.
6. The user data reconstruction attack method for deep federated learning of claim 1, characterized in that the attacker constructs the data representatives of the different users from the parameters the users upload; a data representative can be characterized as follows: when the data representative is used as private data to train the shared model, it produces a parameter update similar to that produced by the true private data; in the corresponding objective over the parameters of the shared model M_t, γ is a scale factor used to balance the magnitude of the update, and X_k denotes the data representative of user k.
7. The user data reconstruction attack method for deep federated learning of claim 1, characterized in that, to prevent the optimization-based algorithm from introducing excessive noise when computing the user data representatives, a local-difference sum is added as a regularization term, where x_ij denotes the pixel value of image x at position (i, j); the term accumulates the distances between adjacent pixels of the image, smoothing the generated image.
8. The user data reconstruction attack method for deep federated learning of claim 1, wherein in each round the attacker's training of the G and D models is expressed as
θ_D ← θ_D − η_1 · ∇_{θ_D} L_D
θ_G ← θ_G − η_2 · ∇_{θ_G} L_G
where η_1 and η_2 denote learning rates, θ_D and θ_G denote the parameters of models D and G, and L_D and L_G denote their respective losses.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910249056.3A | 2019-03-29 | 2019-03-29 | User data reconstruction attack method for deep federated learning
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910249056.3A | 2019-03-29 | 2019-03-29 | User data reconstruction attack method for deep federated learning
Publications (1)
Publication Number | Publication Date
---|---
CN110008696A | 2019-07-12
Family
ID=67168881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201910249056.3A (published as CN110008696A, pending) | User data reconstruction attack method for deep federated learning | 2019-03-29 | 2019-03-29
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110008696A (en) |
Cited By (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110378749A (en) * | 2019-07-25 | 2019-10-25 | 深圳前海微众银行股份有限公司 | Appraisal procedure, device, terminal device and the storage medium of user data similitude |
CN110443378A (en) * | 2019-08-02 | 2019-11-12 | 深圳前海微众银行股份有限公司 | Feature correlation analysis method, device and readable storage medium storing program for executing in federation's study |
CN110443375A (en) * | 2019-08-16 | 2019-11-12 | 深圳前海微众银行股份有限公司 | A kind of federation's learning method and device |
CN110472745A (en) * | 2019-08-06 | 2019-11-19 | 深圳前海微众银行股份有限公司 | Information transferring method and device in a kind of federal study |
CN110569663A (en) * | 2019-08-15 | 2019-12-13 | 深圳市莱法照明通信科技有限公司 | Method, device, system and storage medium for educational data sharing |
CN110569911A (en) * | 2019-09-11 | 2019-12-13 | 深圳绿米联创科技有限公司 | Image recognition method, device, system, electronic equipment and storage medium |
CN110647765A (en) * | 2019-09-19 | 2020-01-03 | 济南大学 | Privacy protection method and system based on knowledge migration under collaborative learning framework |
CN110751291A (en) * | 2019-10-29 | 2020-02-04 | 支付宝(杭州)信息技术有限公司 | Method and device for realizing multi-party combined training neural network of security defense |
CN110942154A (en) * | 2019-11-22 | 2020-03-31 | 深圳前海微众银行股份有限公司 | Data processing method, device, equipment and storage medium based on federal learning |
CN111079946A (en) * | 2019-12-20 | 2020-04-28 | 支付宝(杭州)信息技术有限公司 | Model training method, member detection device training method and system |
CN111079977A (en) * | 2019-11-18 | 2020-04-28 | 中国矿业大学 | Heterogeneous federated learning mine electromagnetic radiation trend tracking method based on SVD algorithm |
CN111091199A (en) * | 2019-12-20 | 2020-05-01 | 哈尔滨工业大学(深圳) | Federal learning method and device based on differential privacy and storage medium |
CN111245903A (en) * | 2019-12-31 | 2020-06-05 | 烽火通信科技股份有限公司 | Joint learning method and system based on edge calculation |
CN111260061A (en) * | 2020-03-09 | 2020-06-09 | 厦门大学 | Differential noise adding method and system in federated learning gradient exchange |
CN111340614A (en) * | 2020-02-28 | 2020-06-26 | 深圳前海微众银行股份有限公司 | Sample sampling method and device based on federal learning and readable storage medium |
CN111447083A (en) * | 2020-03-10 | 2020-07-24 | 中国人民解放军国防科技大学 | Federal learning framework under dynamic bandwidth and unreliable network and compression algorithm thereof |
CN111445031A (en) * | 2020-03-31 | 2020-07-24 | 深圳前海微众银行股份有限公司 | Attack coping method and federal learning device |
CN111460443A (en) * | 2020-05-28 | 2020-07-28 | 南京大学 | Security defense method for data manipulation attack in federated learning |
CN111464568A (en) * | 2020-06-17 | 2020-07-28 | 广东电网有限责任公司佛山供电局 | Method and system for enhancing network attack prevention capability of multiple network ports |
CN111581648A (en) * | 2020-04-06 | 2020-08-25 | 电子科技大学 | Method of federal learning to preserve privacy in irregular users |
CN111598143A (en) * | 2020-04-27 | 2020-08-28 | 浙江工业大学 | Credit evaluation-based defense method for federal learning poisoning attack |
CN111985562A (en) * | 2020-08-20 | 2020-11-24 | 复旦大学 | End cloud collaborative training system for protecting end-side privacy |
CN112039702A (en) * | 2020-08-31 | 2020-12-04 | 中诚信征信有限公司 | Model parameter training method and device based on federal learning and mutual learning |
CN112100659A (en) * | 2020-09-14 | 2020-12-18 | 电子科技大学 | Block chain federal learning system and Byzantine attack detection method |
CN112101403A (en) * | 2020-07-24 | 2020-12-18 | 西安电子科技大学 | Classification method and system based on federated few-sample network model, and electronic equipment |
CN112118099A (en) * | 2020-09-16 | 2020-12-22 | 西安电子科技大学 | Distributed multi-task learning privacy protection method and system for resisting inference attack |
CN112162959A (en) * | 2020-10-15 | 2021-01-01 | 深圳技术大学 | Medical data sharing method and device |
CN112203282A (en) * | 2020-08-28 | 2021-01-08 | 中国科学院信息工程研究所 | 5G Internet of things intrusion detection method and system based on federal transfer learning |
CN112214342A (en) * | 2020-09-14 | 2021-01-12 | 德清阿尔法创新研究院 | Efficient error data detection method in federated learning scene |
CN112257063A (en) * | 2020-10-19 | 2021-01-22 | 上海交通大学 | Cooperative game theory-based detection method for backdoor attacks in federal learning |
CN112329009A (en) * | 2020-10-12 | 2021-02-05 | 南京理工大学 | Defense method for noise attack in joint learning |
CN112434758A (en) * | 2020-12-17 | 2021-03-02 | 浙江工业大学 | Cluster-based federated learning free-rider attack defense method |
WO2021036014A1 (en) * | 2019-08-28 | 2021-03-04 | 深圳前海微众银行股份有限公司 | Federated learning credit management method, apparatus and device, and readable storage medium |
CN112784990A (en) * | 2021-01-22 | 2021-05-11 | 支付宝(杭州)信息技术有限公司 | Training method of member inference model |
WO2021090142A1 (en) * | 2019-11-05 | 2021-05-14 | International Business Machines Corporation | Intelligent agent to simulate customer data |
CN112819180A (en) * | 2021-01-26 | 2021-05-18 | 华中科技大学 | Multi-service data generation method and device based on federal generation model |
CN112949670A (en) * | 2019-12-10 | 2021-06-11 | 京东数字科技控股有限公司 | Data set switching method and device for federal learning model |
CN113051608A (en) * | 2021-03-11 | 2021-06-29 | 佳讯飞鸿(北京)智能科技研究院有限公司 | Method for transmitting virtualized sharing model for federated learning |
WO2021142627A1 (en) * | 2020-01-14 | 2021-07-22 | Oppo广东移动通信有限公司 | Resource scheduling method and apparatus, and readable storage medium |
CN113159332A (en) * | 2020-01-23 | 2021-07-23 | 华为技术有限公司 | Method and device for realizing model updating |
CN113239351A (en) * | 2020-12-08 | 2021-08-10 | 武汉大学 | Novel data pollution attack defense method for Internet of things system |
CN113297573A (en) * | 2021-06-11 | 2021-08-24 | 浙江工业大学 | Vertical federal learning defense method and device based on GAN simulation data generation |
TWI745958B (en) * | 2019-11-19 | 2021-11-11 | 大陸商支付寶(杭州)信息技術有限公司 | Training method and device of neural network model for protecting privacy and safety |
CN113934578A (en) * | 2021-10-28 | 2022-01-14 | 电子科技大学 | Method for data recovery attack in federated learning scene |
WO2022033579A1 (en) * | 2020-08-13 | 2022-02-17 | 华为技术有限公司 | Federated learning method, device and system |
US11461728B2 (en) | 2019-11-05 | 2022-10-04 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for consortium sharing |
US11461793B2 (en) | 2019-11-05 | 2022-10-04 | International Business Machines Corporation | Identification of behavioral pattern of simulated transaction data |
US11475467B2 (en) | 2019-11-05 | 2022-10-18 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for realistic modeling |
US11475468B2 (en) | 2019-11-05 | 2022-10-18 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for detection model sharing across entities |
US11488185B2 (en) | 2019-11-05 | 2022-11-01 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for consortium sharing |
US11488172B2 (en) | 2019-11-05 | 2022-11-01 | International Business Machines Corporation | Intelligent agent to simulate financial transactions |
US11494835B2 (en) | 2019-11-05 | 2022-11-08 | International Business Machines Corporation | Intelligent agent to simulate financial transactions |
CN115438753A (en) * | 2022-11-03 | 2022-12-06 | 电子科技大学 | Method for measuring security of federal learning protocol data based on generation |
CN115600250A (en) * | 2022-12-12 | 2023-01-13 | 阿里巴巴(中国)有限公司(Cn) | Data processing method, storage medium and electronic device |
US11556734B2 (en) | 2019-11-05 | 2023-01-17 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for realistic modeling |
CN115719085A (en) * | 2023-01-10 | 2023-02-28 | 武汉大学 | Deep neural network model inversion attack defense method and equipment |
US11599884B2 (en) | 2019-11-05 | 2023-03-07 | International Business Machines Corporation | Identification of behavioral pattern of simulated transaction data |
TWI800303B (en) * | 2022-03-16 | 2023-04-21 | 英業達股份有限公司 | Federated learning method using synonyms |
WO2023097602A1 (en) * | 2021-12-02 | 2023-06-08 | 东莞理工学院 | Inference method and apparatus for cooperative training data attribute, and device and storage medium |
US11676218B2 (en) | 2019-11-05 | 2023-06-13 | International Business Machines Corporation | Intelligent agent to simulate customer data |
US11842357B2 (en) | 2019-11-05 | 2023-12-12 | International Business Machines Corporation | Intelligent agent to simulate customer data |
CN112214342B (en) * | 2020-09-14 | 2024-05-24 | 德清阿尔法创新研究院 | Efficient error data detection method in federal learning scene |
- 2019-03-29: CN application CN201910249056.3A (publication CN110008696A) filed; legal status: active, pending
Cited By (92)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110378749B (en) * | 2019-07-25 | 2023-09-26 | 深圳前海微众银行股份有限公司 | Client similarity evaluation method and device, terminal equipment and storage medium |
CN110378749A (en) * | 2019-07-25 | 2019-10-25 | 深圳前海微众银行股份有限公司 | User data similarity assessment method, device, terminal device and storage medium |
CN110443378A (en) * | 2019-08-02 | 2019-11-12 | 深圳前海微众银行股份有限公司 | Feature correlation analysis method and device in federated learning, and readable storage medium |
CN110443378B (en) * | 2019-08-02 | 2023-11-03 | 深圳前海微众银行股份有限公司 | Feature correlation analysis method and device in federal learning and readable storage medium |
CN110472745A (en) * | 2019-08-06 | 2019-11-19 | 深圳前海微众银行股份有限公司 | Information transferring method and device in a kind of federal study |
CN110569663A (en) * | 2019-08-15 | 2019-12-13 | 深圳市莱法照明通信科技有限公司 | Method, device, system and storage medium for educational data sharing |
CN110443375A (en) * | 2019-08-16 | 2019-11-12 | 深圳前海微众银行股份有限公司 | Federated learning method and device |
WO2021036014A1 (en) * | 2019-08-28 | 2021-03-04 | 深圳前海微众银行股份有限公司 | Federated learning credit management method, apparatus and device, and readable storage medium |
CN110569911A (en) * | 2019-09-11 | 2019-12-13 | 深圳绿米联创科技有限公司 | Image recognition method, device, system, electronic equipment and storage medium |
CN110647765A (en) * | 2019-09-19 | 2020-01-03 | 济南大学 | Privacy protection method and system based on knowledge migration under collaborative learning framework |
WO2021082633A1 (en) * | 2019-10-29 | 2021-05-06 | 支付宝(杭州)信息技术有限公司 | Multi-party joint neural network training method and apparatus for achieving security defense |
CN110751291A (en) * | 2019-10-29 | 2020-02-04 | 支付宝(杭州)信息技术有限公司 | Method and device for realizing multi-party combined training neural network of security defense |
US11461728B2 (en) | 2019-11-05 | 2022-10-04 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for consortium sharing |
US11488172B2 (en) | 2019-11-05 | 2022-11-01 | International Business Machines Corporation | Intelligent agent to simulate financial transactions |
US11676218B2 (en) | 2019-11-05 | 2023-06-13 | International Business Machines Corporation | Intelligent agent to simulate customer data |
US11599884B2 (en) | 2019-11-05 | 2023-03-07 | International Business Machines Corporation | Identification of behavioral pattern of simulated transaction data |
US11461793B2 (en) | 2019-11-05 | 2022-10-04 | International Business Machines Corporation | Identification of behavioral pattern of simulated transaction data |
US11475467B2 (en) | 2019-11-05 | 2022-10-18 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for realistic modeling |
US11475468B2 (en) | 2019-11-05 | 2022-10-18 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for detection model sharing across entities |
US11556734B2 (en) | 2019-11-05 | 2023-01-17 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for realistic modeling |
US11488185B2 (en) | 2019-11-05 | 2022-11-01 | International Business Machines Corporation | System and method for unsupervised abstraction of sensitive data for consortium sharing |
GB2605054A (en) * | 2019-11-05 | 2022-09-21 | Ibm | Intelligent agent to simulate customer data |
WO2021090142A1 (en) * | 2019-11-05 | 2021-05-14 | International Business Machines Corporation | Intelligent agent to simulate customer data |
US11842357B2 (en) | 2019-11-05 | 2023-12-12 | International Business Machines Corporation | Intelligent agent to simulate customer data |
US11494835B2 (en) | 2019-11-05 | 2022-11-08 | International Business Machines Corporation | Intelligent agent to simulate financial transactions |
CN111079977A (en) * | 2019-11-18 | 2020-04-28 | 中国矿业大学 | Heterogeneous federated learning mine electromagnetic radiation trend tracking method based on SVD algorithm |
TWI745958B (en) * | 2019-11-19 | 2021-11-11 | 大陸商支付寶(杭州)信息技術有限公司 | Training method and device of neural network model for protecting privacy and safety |
CN110942154B (en) * | 2019-11-22 | 2021-07-06 | 深圳前海微众银行股份有限公司 | Data processing method, device, equipment and storage medium based on federal learning |
CN110942154A (en) * | 2019-11-22 | 2020-03-31 | 深圳前海微众银行股份有限公司 | Data processing method, device, equipment and storage medium based on federal learning |
CN112949670A (en) * | 2019-12-10 | 2021-06-11 | 京东数字科技控股有限公司 | Data set switching method and device for federal learning model |
CN111079946A (en) * | 2019-12-20 | 2020-04-28 | 支付宝(杭州)信息技术有限公司 | Model training method, member detection device training method and system |
WO2021120854A1 (en) * | 2019-12-20 | 2021-06-24 | 支付宝(杭州)信息技术有限公司 | Model training method, and method and system for training member detection device |
CN111091199A (en) * | 2019-12-20 | 2020-05-01 | 哈尔滨工业大学(深圳) | Federal learning method and device based on differential privacy and storage medium |
CN111245903B (en) * | 2019-12-31 | 2022-07-01 | 烽火通信科技股份有限公司 | Joint learning method and system based on edge calculation |
CN111245903A (en) * | 2019-12-31 | 2020-06-05 | 烽火通信科技股份有限公司 | Joint learning method and system based on edge calculation |
WO2021142627A1 (en) * | 2020-01-14 | 2021-07-22 | Oppo广东移动通信有限公司 | Resource scheduling method and apparatus, and readable storage medium |
WO2021147373A1 (en) * | 2020-01-23 | 2021-07-29 | 华为技术有限公司 | Method and device for implementing model update |
CN113159332A (en) * | 2020-01-23 | 2021-07-23 | 华为技术有限公司 | Method and device for realizing model updating |
CN113159332B (en) * | 2020-01-23 | 2024-01-30 | 华为技术有限公司 | Method and equipment for realizing model update |
CN111340614A (en) * | 2020-02-28 | 2020-06-26 | 深圳前海微众银行股份有限公司 | Sample sampling method and device based on federal learning and readable storage medium |
CN111340614B (en) * | 2020-02-28 | 2021-05-18 | 深圳前海微众银行股份有限公司 | Sample sampling method and device based on federal learning and readable storage medium |
CN111260061A (en) * | 2020-03-09 | 2020-06-09 | 厦门大学 | Differential noise adding method and system in federated learning gradient exchange |
CN111260061B (en) * | 2020-03-09 | 2022-07-19 | 厦门大学 | Differential noise adding method and system in federated learning gradient exchange |
CN111447083B (en) * | 2020-03-10 | 2022-10-21 | 中国人民解放军国防科技大学 | Federal learning framework under dynamic bandwidth and unreliable network and compression algorithm thereof |
CN111447083A (en) * | 2020-03-10 | 2020-07-24 | 中国人民解放军国防科技大学 | Federal learning framework under dynamic bandwidth and unreliable network and compression algorithm thereof |
WO2021196701A1 (en) * | 2020-03-31 | 2021-10-07 | 深圳前海微众银行股份有限公司 | Attack coping method and federated learning device |
CN111445031B (en) * | 2020-03-31 | 2021-07-27 | 深圳前海微众银行股份有限公司 | Attack coping method and federal learning device |
CN111445031A (en) * | 2020-03-31 | 2020-07-24 | 深圳前海微众银行股份有限公司 | Attack coping method and federal learning device |
CN111581648B (en) * | 2020-04-06 | 2022-06-03 | 电子科技大学 | Method of federal learning to preserve privacy in irregular users |
CN111581648A (en) * | 2020-04-06 | 2020-08-25 | 电子科技大学 | Method of federal learning to preserve privacy in irregular users |
CN111598143A (en) * | 2020-04-27 | 2020-08-28 | 浙江工业大学 | Credit evaluation-based defense method for federal learning poisoning attack |
CN111598143B (en) * | 2020-04-27 | 2023-04-07 | 浙江工业大学 | Credit evaluation-based defense method for federal learning poisoning attack |
CN111460443B (en) * | 2020-05-28 | 2022-09-23 | 南京大学 | Security defense method for data manipulation attack in federated learning |
CN111460443A (en) * | 2020-05-28 | 2020-07-28 | 南京大学 | Security defense method for data manipulation attack in federated learning |
CN111464568A (en) * | 2020-06-17 | 2020-07-28 | 广东电网有限责任公司佛山供电局 | Method and system for enhancing network attack prevention capability of multiple network ports |
CN112101403B (en) * | 2020-07-24 | 2023-12-15 | 西安电子科技大学 | Classification method and system based on federal few-sample network model and electronic equipment |
CN112101403A (en) * | 2020-07-24 | 2020-12-18 | 西安电子科技大学 | Classification method and system based on federated few-sample network model, and electronic equipment |
WO2022033579A1 (en) * | 2020-08-13 | 2022-02-17 | 华为技术有限公司 | Federated learning method, device and system |
CN111985562A (en) * | 2020-08-20 | 2020-11-24 | 复旦大学 | End cloud collaborative training system for protecting end-side privacy |
CN111985562B (en) * | 2020-08-20 | 2022-07-26 | 复旦大学 | End cloud collaborative training system for protecting end-side privacy |
CN112203282B (en) * | 2020-08-28 | 2022-02-18 | 中国科学院信息工程研究所 | 5G Internet of things intrusion detection method and system based on federal transfer learning |
CN112203282A (en) * | 2020-08-28 | 2021-01-08 | 中国科学院信息工程研究所 | 5G Internet of things intrusion detection method and system based on federal transfer learning |
CN112039702A (en) * | 2020-08-31 | 2020-12-04 | 中诚信征信有限公司 | Model parameter training method and device based on federal learning and mutual learning |
CN112039702B (en) * | 2020-08-31 | 2022-04-12 | 中诚信征信有限公司 | Model parameter training method and device based on federal learning and mutual learning |
CN112100659A (en) * | 2020-09-14 | 2020-12-18 | 电子科技大学 | Block chain federal learning system and Byzantine attack detection method |
CN112214342A (en) * | 2020-09-14 | 2021-01-12 | 德清阿尔法创新研究院 | Efficient error data detection method in federated learning scene |
CN112214342B (en) * | 2020-09-14 | 2024-05-24 | 德清阿尔法创新研究院 | Efficient error data detection method in federal learning scene |
CN112100659B (en) * | 2020-09-14 | 2023-04-07 | 电子科技大学 | Block chain federal learning system and Byzantine attack detection method |
CN112118099B (en) * | 2020-09-16 | 2021-10-08 | 西安电子科技大学 | Distributed multi-task learning privacy protection method and system for resisting inference attack |
CN112118099A (en) * | 2020-09-16 | 2020-12-22 | 西安电子科技大学 | Distributed multi-task learning privacy protection method and system for resisting inference attack |
CN112329009B (en) * | 2020-10-12 | 2022-12-06 | 南京理工大学 | Defense method for noise attack in joint learning |
CN112329009A (en) * | 2020-10-12 | 2021-02-05 | 南京理工大学 | Defense method for noise attack in joint learning |
CN112162959A (en) * | 2020-10-15 | 2021-01-01 | 深圳技术大学 | Medical data sharing method and device |
CN112162959B (en) * | 2020-10-15 | 2023-10-10 | 深圳技术大学 | Medical data sharing method and device |
CN112257063A (en) * | 2020-10-19 | 2021-01-22 | 上海交通大学 | Cooperative game theory-based detection method for backdoor attacks in federal learning |
CN112257063B (en) * | 2020-10-19 | 2022-09-02 | 上海交通大学 | Cooperative game theory-based detection method for backdoor attacks in federal learning |
CN113239351A (en) * | 2020-12-08 | 2021-08-10 | 武汉大学 | Novel data pollution attack defense method for Internet of things system |
CN113239351B (en) * | 2020-12-08 | 2022-05-13 | 武汉大学 | Novel data pollution attack defense method for Internet of things system |
CN112434758B (en) * | 2020-12-17 | 2024-02-13 | 浙江工业大学 | Clustering-based federated learning free-rider attack defense method |
CN112434758A (en) * | 2020-12-17 | 2021-03-02 | 浙江工业大学 | Cluster-based federated learning free-rider attack defense method |
CN112784990A (en) * | 2021-01-22 | 2021-05-11 | 支付宝(杭州)信息技术有限公司 | Training method of member inference model |
CN112819180A (en) * | 2021-01-26 | 2021-05-18 | 华中科技大学 | Multi-service data generation method and device based on federal generation model |
CN113051608A (en) * | 2021-03-11 | 2021-06-29 | 佳讯飞鸿(北京)智能科技研究院有限公司 | Method for transmitting virtualized sharing model for federated learning |
CN113297573A (en) * | 2021-06-11 | 2021-08-24 | 浙江工业大学 | Vertical federal learning defense method and device based on GAN simulation data generation |
CN113934578A (en) * | 2021-10-28 | 2022-01-14 | 电子科技大学 | Method for data recovery attack in federated learning scene |
WO2023097602A1 (en) * | 2021-12-02 | 2023-06-08 | 东莞理工学院 | Inference method and apparatus for cooperative training data attribute, and device and storage medium |
TWI800303B (en) * | 2022-03-16 | 2023-04-21 | 英業達股份有限公司 | Federated learning method using synonyms |
CN115438753B (en) * | 2022-11-03 | 2023-01-06 | 电子科技大学 | Method for measuring security of federal learning protocol data based on generation |
CN115438753A (en) * | 2022-11-03 | 2022-12-06 | 电子科技大学 | Method for measuring security of federal learning protocol data based on generation |
CN115600250B (en) * | 2022-12-12 | 2023-03-21 | 阿里巴巴(中国)有限公司 | Data processing method, storage medium and electronic device |
CN115600250A (en) * | 2022-12-12 | 2023-01-13 | 阿里巴巴(中国)有限公司(Cn) | Data processing method, storage medium and electronic device |
CN115719085A (en) * | 2023-01-10 | 2023-02-28 | 武汉大学 | Deep neural network model inversion attack defense method and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110008696A (en) | A kind of user data Rebuilding Attack method towards the study of depth federation | |
CN109815893B (en) | Color face image illumination domain normalization method based on cyclic generation countermeasure network | |
Ren et al. | Low-light image enhancement via a deep hybrid network | |
CN111145116B (en) | Sea surface rainy day image sample augmentation method based on generation of countermeasure network | |
CN109859288B (en) | Image coloring method and device based on generation countermeasure network | |
CN110460600A (en) | Joint deep learning method capable of resisting generative adversarial network attacks | |
CN110458765B (en) | Image quality enhancement method based on perception preserving convolution network | |
CN110097178A (en) | An entropy-attention-based neural network model compression and acceleration method | |
CN111625820A (en) | Federal defense method based on AIoT-oriented security | |
CN110660020B (en) | Image super-resolution method of antagonism generation network based on fusion mutual information | |
CN109064422A (en) | A kind of underwater image restoration method based on fusion confrontation network | |
Hsu et al. | A high-capacity QRD-based blind color image watermarking algorithm incorporated with AI technologies | |
CN114997420B (en) | Federal learning system and method based on segmentation learning and differential privacy fusion | |
CN108460720A (en) | A method of changing image style based on a generative adversarial network model | |
CN115481431A (en) | Dual-disturbance-based privacy protection method for federated learning counterreasoning attack | |
CN114362948B (en) | Federated derived feature logistic regression modeling method | |
CN113724149A (en) | Weak supervision visible light remote sensing image thin cloud removing method | |
Geng et al. | Improved gradient inversion attacks and defenses in federated learning | |
CN108492275B (en) | No-reference stereo image quality evaluation method based on deep neural network | |
Wang et al. | Data hiding during image processing using capsule networks | |
CN114639174A (en) | Privacy type deep forgery detection method under federal cooperation | |
CN115329388B (en) | Privacy enhancement method for federally generated countermeasure network | |
CN115510472B (en) | Multi-difference privacy protection method and system for cloud edge aggregation system | |
CN116187469A (en) | Client member reasoning attack method based on federal distillation learning framework | |
CN116011597A (en) | Personalized federal learning method and device based on graph data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||