CN116310642A - Variable dynamic discriminator differential privacy data generator based on PATE framework - Google Patents

Variable dynamic discriminator differential privacy data generator based on PATE framework Download PDF

Info

Publication number
CN116310642A
Authority
CN
China
Prior art keywords
discriminator
teacher
generator
data
differential privacy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310263012.2A
Other languages
Chinese (zh)
Inventor
于娟
张天汉
韩建民
姚鑫
彭浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN202310263012.2A priority Critical patent/CN116310642A/en
Publication of CN116310642A publication Critical patent/CN116310642A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Bioethics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a variable dynamic discriminator differential privacy data generator based on the PATE framework, comprising: 1) acquiring a real data set and deriving a plurality of data subsets from it; 2) constructing a student generator and a plurality of teacher discriminators based on the PATE framework; 3) the generator generates a batch of data, and this data, together with the data subsets from 1), is sent to the teacher discriminators; 4) training the teacher discriminators and updating the gradient vectors; 5) aggregating the gradient vectors and updating the student generator to obtain an updated student generator; 6) repeating 3)-5) with the updated student generator until convergence, yielding trained teacher discriminators; 7) obtaining guidance information from the trained teacher discriminators and outputting the final result. Through the PATE mechanism, the invention prevents the model from leaking privacy during training while keeping the generated data highly usable.

Description

Variable dynamic discriminator differential privacy data generator based on PATE framework
Technical Field
The invention relates to the technical field of image data generation, and in particular to a variable dynamic discriminator differential privacy data generator based on a PATE framework.
Background
The development of machine learning relies on large data sets. However, recent research has found that training a generative adversarial network (GAN) on sensitive data also raises privacy problems; in particular, the trained model itself can leak the private data.
With recent research on the GAN structure, researchers have begun to combine GANs with differential privacy: a GAN trained under differential privacy can generate an unlimited amount of high-utility synthetic data satisfying the differential privacy property, while also alleviating the lack of high-quality data in some scenarios. The most classical method is DP-SGD, which protects privacy by adding differential privacy (DP) noise to the gradients, but this greatly reduces the availability of the data. The PATE framework, one of the DP mechanisms, guarantees data availability to a certain extent: it adopts a plurality of discriminators as teacher models and votes with gradient vectors to ensure privacy.
However, the multi-teacher discriminator architecture of the PATE framework introduces new problems of training instability and overfitting. The invention therefore provides DDG-PATE, a high-quality data generation method based on the PATE structure that satisfies differential privacy.
Disclosure of Invention
To solve the problem that prior-art differential privacy protection of generative adversarial models reduces the usability of the generated data, the invention provides a variable dynamic discriminator differential privacy data generator based on a PATE framework.
To achieve this technical purpose, the invention provides the following technical scheme: a PATE framework-based variable dynamic discriminator differential privacy data generator, comprising:
step 1: acquiring a real data set, and acquiring a plurality of data subsets based on the real data set;
step 2: constructing a student generator and a plurality of teacher discriminators based on the PATE framework, wherein one teacher discriminator corresponds to one data subset;
step 3: generating data based on the student generator, wherein the generated data are respectively combined with a plurality of data subsets to obtain a plurality of training sets;
step 4: inputting the training set into a corresponding teacher discriminator to train, and updating gradient vectors;
step 5: aggregating the gradient vectors to obtain an aggregate gradient vector, updating the student generator based on the aggregate gradient vector, and obtaining an updated student generator;
step 6: based on the updated student generator, circularly executing the steps 3-5 until the teacher discriminator converges to obtain a trained teacher discriminator;
step 7: and obtaining guide information based on the trained teacher discriminator, and outputting a final result by the updated student generator based on the guide information.
Optionally, the plurality of data subsets comprises a plurality of independent and identically distributed subsets.
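As an illustration only (not the patent's code), one common way to obtain equally sized, disjoint, approximately i.i.d. subsets is to shuffle the data and split it; the function name, teacher count, and seed below are assumptions:

```python
import numpy as np

def partition_dataset(data: np.ndarray, n_teachers: int, seed: int = 0) -> list:
    """Shuffle, then split into n_teachers disjoint, equally sized subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    usable = len(data) - len(data) % n_teachers  # drop the remainder so sizes match
    return np.split(data[idx[:usable]], n_teachers)

# Example: 10 teacher discriminators over 60 000 records
subsets = partition_dataset(np.arange(60000).reshape(-1, 1), n_teachers=10)
assert all(len(s) == 6000 for s in subsets)
```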
Optionally, training the teacher discriminators includes:
inputting the training set into a corresponding teacher discriminator, and obtaining corresponding discrimination results by a plurality of teacher discriminators;
obtaining a loss value of each teacher discriminator based on the discrimination result;
obtaining loss weights of a plurality of teacher discriminators based on the loss value of each teacher discriminator;
updating the gradient vector based on the loss weight.
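A minimal sketch of this weighting step. The exact weight formula appears only as an image in the original document, so the form below — each teacher's weight is its share of the total loss, and each gradient vector is scaled by that weight before aggregation — is an assumption:

```python
import numpy as np

def loss_weights(losses: np.ndarray) -> np.ndarray:
    """Assumed form: w_i = L_Di / sum_j L_Dj, so the weights sum to 1."""
    return losses / losses.sum()

def weight_gradients(teacher_grads: np.ndarray, losses: np.ndarray) -> np.ndarray:
    """Scale each teacher's gradient row (shape (k, d)) by its loss weight."""
    w = loss_weights(np.asarray(losses, dtype=float))
    return teacher_grads * w[:, None]

teacher_losses = np.array([0.9, 1.1, 1.0])   # L_Di for k = 3 teachers
teacher_grads = np.random.randn(3, 128)      # one flattened gradient per teacher
weighted = weight_gradients(teacher_grads, teacher_losses)
```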
Optionally, the dynamic change of the teacher discriminator capability proceeds as follows:
the initial capability of the teacher discriminator is 1/2 of the capability of the original teacher discriminator;
when softmax(ε/L_D) ≤ 1/4, the capability of the teacher discriminator is 1/4 of the capability of the original teacher discriminator;
when 1/4 < softmax(ε/L_D) ≤ 1/2, the capability of the teacher discriminator is 1/2 of the capability of the original teacher discriminator;
when 1/2 < softmax(ε/L_D), the capability of the teacher discriminator equals the capability of the original teacher discriminator;
where ε represents the privacy budget of the PATE structure and L_D represents the cumulative loss value in teacher discriminator training,
L_D = Σ_{i=1}^{k} L_Di,
where D_i (i = 1, 2, …, k) represents the i-th teacher discriminator and L_Di represents the loss value of the i-th teacher discriminator.
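A sketch of the three-band schedule above. Note that a softmax over a single scalar is degenerate (always 1), so the score s = softmax(ε/L_D) is treated here as a precomputed value in (0, 1]; how it is actually obtained is the patent's detail and is not reproduced:

```python
def capacity_factor(score: float) -> float:
    """Map the score s = softmax(eps / L_D) to a fraction of the original
    teacher discriminator capability (the initial capability is 1/2)."""
    if score <= 0.25:
        return 0.25   # shrink the discriminator to 1/4 of its original size
    elif score <= 0.5:
        return 0.5    # shrink to 1/2 of the original size
    return 1.0        # keep the full original capability
```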
Optionally, gradient vector aggregation is performed based on a differential privacy mechanism, wherein privacy protection in the differential privacy mechanism includes RDP (Rényi differential privacy).
Optionally, the process of aggregating the gradient vectors to obtain an aggregate gradient vector includes:
performing gradient vector discretization on a plurality of gradient vectors to obtain a gradient histogram;
obtaining a voting result of each teacher discriminator based on the gradient histogram;
adding Laplace noise to the voting result of the teacher discriminator, and calculating a final result based on a Confident-GNMax aggregator.
Optionally, Laplace noise is added to the voting results of the teacher discriminators, a noisy maximum vote is obtained based on the Confident-GNMax aggregator, and the final result is obtained from the noisy maximum vote;
when the noisy maximum vote is larger than a given threshold, the final result is the vote whose noisy count (vote plus noise) is maximal;
when the noisy maximum vote is smaller than or equal to the given threshold, the final result is the noisy maximum vote itself.
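A hedged sketch of this threshold logic. Note that the original Confident-GNMax of Papernot et al. uses Gaussian noise and abstains below the threshold; the variant described here adds Laplace noise and falls back to the noisy maximum vote, so the code follows the patent's wording rather than the published aggregator, and the noise scale and threshold are assumptions:

```python
import numpy as np

def aggregate_votes(votes: np.ndarray, threshold: float, scale: float,
                    rng: np.random.Generator):
    """votes[b] = number of teachers voting for bin b (one gradient dimension).

    Returns the winning bin index when the noisy maximum clears the threshold,
    otherwise the noisy maximum vote itself, per the description above."""
    noisy = votes + rng.laplace(0.0, scale, size=votes.shape)
    if noisy.max() > threshold:
        return int(np.argmax(noisy))
    return float(noisy.max())

rng = np.random.default_rng(0)
result = aggregate_votes(np.array([1.0, 7.0, 2.0]), threshold=4.0, scale=1.0, rng=rng)
```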
The invention has the following technical effects:
1. Privacy protection is realized based on the PATE structure, and a dynamically variable discriminator and loss weights are added, solving the poor usability of generated data caused by the differential privacy added for privacy guarantees in existing data generation methods.
2. The invention makes model training converge more stably, and to a certain extent solves the training collapse caused by over-fitting and under-fitting of the model.
3. The invention observes that because the real data is divided into a plurality of subsets, slight differences exist between individual subsets even though they are independent and identically distributed, so a teacher discriminator can easily mislead the student generator during training; the teacher loss weights are therefore adopted.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an overall frame diagram of a system in an embodiment of the invention;
FIG. 2 is a diagram of a dynamic discriminator in an embodiment of the invention;
FIG. 3 is a flow chart of an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
This embodiment provides a variable dynamic discriminator differential privacy data generator based on a PATE framework. The original data containing private information is first divided into a plurality of independent, identically distributed subsets of the same size. Each subset corresponds to one teacher discriminator. The student module and the teacher module are updated through continued training iterations. Each iteration consists of the following steps:
step one: and randomly generating m batches of noise samples according to the sample information, and giving the noise samples to a student generator to generate a group of data.
Step two: each teacher discriminator D_i updates its weights based on the discriminator loss L_Di, so as to reduce the discrimination loss of the discriminator on the real data and the synthesized data (the weight-update formula is given as an image in the original).
Step three: the loss of each teacher discriminator is calculated and the gradient is recorded; this gradient information contains the key information (i.e., the availability of the generated data) that instructs the student generator G how to better generate valid data. When training data is limited, the disjoint partitions show slight distribution differences, which lead to different optimality across the teacher discriminator modules; loss weights are therefore applied before gradient aggregation so that the guidance information of each teacher discriminator is kept within a usable range through weight control.
Step four: to ensure that the teacher gradient information does not leak private information while guiding the student generator, gradient aggregation is performed with an effective DP mechanism added, mainly adopting RDP as the privacy guarantee, and the final aggregate gradient vector is transmitted to the student generator. The losses of the plurality of teacher discriminators are also aggregated, to control the capability of the teacher discriminators in the next round of training.
Step five: the student generator improves its generating ability through back-propagation. Its loss function L_G(Z, X) is defined over the synthetic data, where z_i is a noise sample, G(z_i) is the generated data, and the DP-aggregated teacher discriminator gradient vector is applied to the synthetic data (the exact form of L_G is given as images in the original). Training finally reduces L_G(Z, X) and propagates the gradient with added DP noise to the student generator.
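Although the patent's exact loss is not reproduced above, a minimal PyTorch sketch (an assumption, not the patent's code) shows one standard way to propagate an externally aggregated gradient g_agg, taken with respect to the synthetic samples, into the student generator: the inner product sum(g_agg · G(z)) has parameter gradient g_agg^T · dG/dθ, which is exactly the chain-rule term needed.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
opt = torch.optim.Adam(G.parameters(), lr=1e-4)

z = torch.randn(32, latent_dim)       # noise samples z_i
fake = G(z)                           # generated data G(z_i)
g_agg = torch.randn_like(fake)        # stand-in for the DP-aggregated teacher gradient

opt.zero_grad()
surrogate = (g_agg.detach() * fake).sum()  # d(surrogate)/dθ = g_agg^T · dG/dθ
surrogate.backward()
opt.step()
```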
Example two
The embodiment provides a variable dynamic discriminator differential privacy data generator based on a PATE framework, and the whole framework diagram of the system is shown in figure 1.
(1) The real data set is divided into n independent, identically distributed subsets, and then a student generator and teacher discriminators are constructed, the number of teacher discriminators being the same as the number of data subsets. To strengthen privacy, under the PATE mechanism each teacher discriminator can only access the data subset belonging to it, and a differential privacy mechanism can be added to the teacher discriminators during training.
(2) The student generator generates data and gives it, together with the data subset, to the discriminators; each discriminator judges the data as real or fake according to its current training knowledge and computes a loss and gradient for its discrimination result, i.e., one loss L_Di per teacher discriminator. With ε the privacy budget, the proportion of each teacher discriminator's loss in the total loss of all teacher discriminators is computed, from which the weight of each discriminator is obtained (the weight formula is given as images in the original); the gradient of each teacher discriminator is then multiplied by its weight according to the loss, and the results of all teacher discriminators are aggregated. The method is as follows:
First, the gradient vectors are discretized by creating a gradient histogram and mapping each element to the midpoint of the histogram bin to which it belongs. Each teacher discriminator then votes for the k bins associated with the k elements of its gradient vector. Finally, for each dimension, the most-voted bin is computed with the Confident-GNMax aggregator, and the aggregate gradient vector consists of the midpoints of the selected bins. The Confident-GNMax aggregator first computes the noisy maximum vote; if the noisy maximum vote is greater than a given threshold, it outputs the vote with the maximal noisy count, otherwise it outputs the noisy maximum vote.
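A hedged sketch of the discretization-and-voting step just described; the bin count, clipping range, and plain (noise-free) argmax are simplifying assumptions, with the noisy Confident-GNMax selection shown earlier:

```python
import numpy as np

def discretize_and_vote(teacher_grads: np.ndarray, n_bins: int = 10,
                        lo: float = -1.0, hi: float = 1.0) -> np.ndarray:
    """teacher_grads: (k, d) array, one flattened gradient vector per teacher.

    Each coordinate is mapped to a histogram bin; teachers vote per dimension;
    the aggregate gradient takes the midpoint of the winning bin."""
    edges = np.linspace(lo, hi, n_bins + 1)
    mids = (edges[:-1] + edges[1:]) / 2
    clipped = np.clip(teacher_grads, lo, hi - 1e-9)
    bins = np.digitize(clipped, edges) - 1        # (k, d) bin index per coordinate
    agg = np.empty(teacher_grads.shape[1])
    for dim in range(teacher_grads.shape[1]):
        votes = np.bincount(bins[:, dim], minlength=n_bins)
        agg[dim] = mids[np.argmax(votes)]         # noise-free argmax for brevity
    return agg

agg_gradient = discretize_and_vote(np.random.randn(5, 8))
```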
(3) The student generator updates according to the aggregated gradient information, then continues to generate data to train the teacher discriminator, and repeats the above process until the final model converges and the student generator generates data meeting the requirements.
The variation of the variable dynamic discriminator is shown in FIG. 2. The upper part of FIG. 2 shows the capability of the common discriminator, while the lower part shows that capability weakened. During training, the discriminator capability is increased or decreased according to the discriminator loss obtained in the previous round: when softmax(ε/L_D) ≤ 1/4, the overall discriminator is scaled down to 1/4 of its original size; when 1/4 < softmax(ε/L_D) ≤ 1/2, the overall discriminator shrinks to 1/2 of its original size; when 1/2 < softmax(ε/L_D), the original size is maintained. Here ε is the privacy budget of the PATE structure, and at the beginning of training the discriminator capability is half that of the original discriminator.
The implementation of differential privacy is guaranteed by the PATE mechanism. In PATE, a private data set is first partitioned into disjoint data subsets, and a machine learning model, called a teacher model, is trained on each partition; all teachers solve the same machine learning task, but they are trained independently. To guarantee the privacy of the model, PATE adds noise when all teacher predictions are aggregated to form a consensus: the number of teachers voting for each category is counted, and random noise drawn from a Laplace or Gaussian distribution is added to perturb the counts. For example, if the data set is divided according to the requirements of PATE, a person's diagnosis information exists in only one data subset, and a teacher model is trained on each partition. Every teacher model produces a prediction; the voting results are tallied in a histogram, from which the final result is obtained. Moreover, if most teachers output the same category, adding noise does not change which category receives the most votes, and the noise satisfies the requirement of differential privacy, which is defined as follows:
A randomized algorithm M satisfies (ε, δ)-differential privacy if, for all S ⊆ Range(M) and any two adjacent data sets D and D',
Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D') ∈ S] + δ.
This definition means that pure ε-differential privacy is allowed to be broken with probability at most δ.
Therefore, noise such as Laplace noise is introduced to perturb the vote counts, so that privacy is protected.
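A toy numeric illustration (assumed parameters, not from the patent) of the Laplace-noised label vote described above: ten teachers vote over three classes, noise of scale 1/ε perturbs the counts, and the argmax is released; with a clear majority the noise rarely flips the winner.

```python
import numpy as np

rng = np.random.default_rng(1)
teacher_predictions = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2, 0])  # 10 teachers, 3 classes
counts = np.bincount(teacher_predictions, minlength=3)           # tallies: [7, 2, 1]
eps = 1.0
noisy_counts = counts + rng.laplace(0.0, 1.0 / eps, size=3)      # perturb the tallies
released_label = int(np.argmax(noisy_counts))                    # DP-released consensus
```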
As shown in FIG. 3, because the PATE mechanism is added and PATE itself satisfies differential privacy, no additional differential privacy step needs to be considered during training: the privacy guarantee is provided by the PATE mechanism while adversarial training continues until the model converges.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and that the above embodiments and descriptions are merely illustrative of the principles of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined in the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1. A variable dynamic discriminator differential privacy data generator based on the PATE framework, comprising:
step 1: acquiring a real data set, and acquiring a plurality of data subsets based on the real data set;
step 2: constructing a student generator and a plurality of teacher discriminators based on the PATE framework, wherein one teacher discriminator corresponds to one data subset;
step 3: generating data based on the student generator, wherein the generated data are respectively combined with a plurality of data subsets to obtain a plurality of training sets;
step 4: inputting the training set into a corresponding teacher discriminator to train, and updating gradient vectors;
step 5: aggregating the gradient vectors to obtain an aggregate gradient vector, updating the student generator based on the aggregate gradient vector, and obtaining an updated student generator;
step 6: based on the updated student generator, circularly executing the steps 3-5 until the teacher discriminator converges to obtain a trained teacher discriminator;
step 7: and obtaining guide information based on the trained teacher discriminator, and outputting a final result by the updated student generator based on the guide information.
2. The PATE framework-based variable dynamic discriminator differential privacy data generator of claim 1, wherein: the plurality of data subsets comprises a plurality of independent and identically distributed subsets.
3. The PATE framework-based variable dynamic discriminator differential privacy data generator of claim 1, wherein: the training process for the teacher discriminator comprises the following steps:
inputting the training set into a corresponding teacher discriminator, and obtaining corresponding discrimination results by a plurality of teacher discriminators;
obtaining a loss value of each teacher discriminator based on the discrimination result;
obtaining loss weights of a plurality of teacher discriminators based on the loss value of each teacher discriminator;
updating the gradient vector based on the loss weight.
4. The PATE framework-based variable dynamic discriminator differential privacy data generator according to claim 3, wherein: the dynamic change process of the capability of the teacher discriminator comprises the following steps:
the initial capacity of the teacher discriminator is 1/2 of the capacity of the original teacher discriminator;
when softmax(ε/L_D) ≤ 1/4, the capability of the teacher discriminator is 1/4 of the capability of the original teacher discriminator;
when 1/4 < softmax(ε/L_D) ≤ 1/2, the capability of the teacher discriminator is 1/2 of the capability of the original teacher discriminator;
when 1/2 < softmax(ε/L_D), the capability of the teacher discriminator equals the capability of the original teacher discriminator;
wherein ε represents the privacy budget of the PATE structure and L_D represents the cumulative loss value in teacher discriminator training,
L_D = Σ_{i=1}^{k} L_Di,
where D_i (i = 1, 2, …, k) represents the i-th teacher discriminator and L_Di represents the loss value of the i-th teacher discriminator.
5. The PATE framework-based variable dynamic discriminator differential privacy data generator of claim 1, wherein: gradient vector aggregation is performed based on a differential privacy mechanism, wherein privacy protection in the differential privacy mechanism includes RDP.
6. The PATE framework-based variable dynamic discriminator differential privacy data generator of claim 5, wherein: the process of aggregating a plurality of gradient vectors to obtain an aggregate gradient vector comprises the following steps:
performing gradient vector discretization on a plurality of gradient vectors to obtain a gradient histogram;
obtaining a voting result of each teacher discriminator based on the gradient histogram;
adding Laplace noise to the voting result of the teacher discriminator, and calculating a final result based on a Confident-GNMax aggregator.
7. The PATE framework-based variable dynamic discriminator differential privacy data generator of claim 6, wherein:
Laplace noise is added to the voting results of the teacher discriminators, a noisy maximum vote is obtained based on the Confident-GNMax aggregator, and the final result is obtained from the noisy maximum vote;
when the noisy maximum vote is larger than a given threshold, the final result is the vote whose noisy count is maximal;
when the noisy maximum vote is smaller than or equal to the given threshold, the final result is the noisy maximum vote itself.
CN202310263012.2A 2023-03-17 2023-03-17 Variable dynamic discriminator differential privacy data generator based on PATE framework Pending CN116310642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310263012.2A CN116310642A (en) 2023-03-17 2023-03-17 Variable dynamic discriminator differential privacy data generator based on PATE framework

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310263012.2A CN116310642A (en) 2023-03-17 2023-03-17 Variable dynamic discriminator differential privacy data generator based on PATE framework

Publications (1)

Publication Number Publication Date
CN116310642A true CN116310642A (en) 2023-06-23

Family

ID=86795668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310263012.2A Pending CN116310642A (en) 2023-03-17 2023-03-17 Variable dynamic discriminator differential privacy data generator based on PATE framework

Country Status (1)

Country Link
CN (1) CN116310642A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116702834A (en) * 2023-08-04 2023-09-05 深圳市智慧城市科技发展集团有限公司 Data generation method, data generation device, and computer-readable storage medium
CN116702834B (en) * 2023-08-04 2023-11-03 深圳市智慧城市科技发展集团有限公司 Data generation method, data generation device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
Zheng et al. Spectrum interference-based two-level data augmentation method in deep learning for automatic modulation classification
Liang et al. A novel wind speed prediction strategy based on Bi-LSTM, MOOFADA and transfer learning for centralized control centers
CN111563841B (en) High-resolution image generation method based on generation countermeasure network
Utkin et al. A deep forest classifier with weights of class probability distribution subsets
CN111080513B (en) Attention mechanism-based human face image super-resolution method
CN111191709B (en) Continuous learning framework and continuous learning method of deep neural network
CN111182637A (en) Wireless network resource allocation method based on generation countermeasure reinforcement learning
CN105160249B (en) A kind of method for detecting virus based on improved Artificial neural network ensemble
CN113065974B (en) Link prediction method based on dynamic network representation learning
CN111737743A (en) Deep learning differential privacy protection method
CN108304877A (en) A kind of physical layer channel authentication method based on machine learning
CN109242223A (en) The quantum support vector machines of city Public Buildings Fire Risk is assessed and prediction technique
CN109165735A (en) Based on the method for generating confrontation network and adaptive ratio generation new samples
CN109255726A (en) A kind of ultra-short term wind power prediction method of Hybrid Intelligent Technology
CN116681144A (en) Federal learning model aggregation method based on dynamic self-adaptive knowledge distillation
Qi et al. Fedbkd: Heterogenous federated learning via bidirectional knowledge distillation for modulation classification in iot-edge system
CN116310642A (en) Variable dynamic discriminator differential privacy data generator based on PATE framework
CN114462683A (en) Cloud edge collaborative multi-residential area load prediction method based on federal learning
CN114707765A (en) Dynamic weighted aggregation-based federated learning load prediction method
Yang et al. Federated continual learning via knowledge fusion: A survey
Shariff et al. Artificial (or) fake human face generator using generative adversarial network (GAN) machine learning model
CN113744175A (en) Image generation method and system for generating countermeasure network based on bidirectional constraint
CN113627597A (en) Countermeasure sample generation method and system based on general disturbance
Emmenegger et al. Treatment effect estimation from observational network data using augmented inverse probability weighting and machine learning
CN111353525A (en) Modeling and missing value filling method for unbalanced incomplete data set

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination