CN113435583A - Generative adversarial network model training method based on federated learning and related device thereof - Google Patents


Info

Publication number
CN113435583A
Authority
CN
China
Prior art keywords
gradient
discriminator
network model
noise
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110758657.4A
Other languages
Chinese (zh)
Other versions
CN113435583B (en)
Inventor
Li Zeyuan (李泽远)
Wang Jianzong (王健宗)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110758657.4A priority Critical patent/CN113435583B/en
Publication of CN113435583A publication Critical patent/CN113435583A/en
Application granted granted Critical
Publication of CN113435583B publication Critical patent/CN113435583B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application belongs to the field of artificial intelligence, is applied to the field of intelligent security and protection, and relates to a generative adversarial network (GAN) model training method based on federated learning. The application also provides a federated-learning-based generative adversarial network model training device, computer equipment and a storage medium. In addition, the application relates to blockchain technology: the image training set can be stored in a blockchain. The method and the device can effectively protect the privacy of the local image data set.

Description

Generative adversarial network model training method based on federated learning and related device thereof
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to a generative adversarial network model training method based on federated learning and related device thereof.
Background
With the rapid development of artificial intelligence in the medical field, and especially its mature use in medical image recognition, medical informatization and biotechnology continue to advance, and the types and sizes of medical data are growing at an unprecedented rate. Because medical image data are confidential, and legal and regulatory constraints apply, it is difficult for multiple medical institutions to achieve data interchange and to expand the dimensionality of case data.
Through federated learning, the image data of each medical institution can remain local while the institutions cooperate in decentralized neural network training, breaking data barriers while protecting private data. However, in distributed model training, an attacker can capture the parameters exchanged during gradient updates, reconstruct the gradients, and thereby recover the medical image data. Current defenses adopted in federated learning include gradient clipping, noise addition and weight-parameter encryption, which protect gradient privacy to a certain extent; but no protective measures are taken for the local data itself, and there remains a risk that an attacker cracks the encrypted weight parameters and back-derives the model to obtain the local medical data set.
Disclosure of Invention
The embodiment of the application aims to provide a generative adversarial network model training method based on federated learning and related device thereof, so as to solve the technical problems in the related art that local medical image data are easy to decrypt and obtain and that security and privacy are low.
In order to solve the above technical problem, an embodiment of the present application provides a method for training a generative adversarial network model based on federated learning, which adopts the following technical solution:
inputting random noise data into a generator of a generative adversarial network model to obtain a first image generation set;
inputting an image training set and the first image generation set into a discriminator of the generative adversarial network model for discrimination to obtain a discrimination result;
when the discrimination result does not meet a preset condition, clipping the original gradient of the discriminator to obtain a clipping gradient, and determining a noise value of the added noise according to the clipping gradient;
adjusting the model parameters of the discriminator according to the clipping gradient and the noise value, and updating the model parameters of the generator under the guidance of the discriminator after its model parameters are adjusted;
inputting noise corresponding to the noise value into the generator after the parameters are adjusted to obtain a second image generation set;
inputting an image verification set and the second image generation set into the discriminator, calculating a privacy loss value of the generative adversarial network model, and if the privacy loss value is not within a preset range, iteratively updating the generative adversarial network model until the privacy loss value falls within the preset range.
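The claimed procedure can be summarized as a control-flow sketch. The following Python sketch is illustrative only, not the patent's implementation; every callable (generate, discriminate, clip_and_noise, update, privacy_loss) is a hypothetical placeholder supplied by the caller.

```python
def train_adversarial_model(generate, discriminate, clip_and_noise, update,
                            privacy_loss, preset_range, max_rounds=50):
    """Control flow of the claimed method: generate a fake image set,
    discriminate it, clip the gradient and determine a noise value when the
    discrimination result fails the preset condition, update the model, and
    repeat until the privacy loss value falls within the preset range."""
    lo, hi = preset_range
    for rnd in range(max_rounds):
        fake_set = generate()                      # first/second image generation set
        result = discriminate(fake_set)            # discrimination result
        if result is not None:                     # preset condition not met
            clipped_gradient, noise_value = clip_and_noise(result)
            update(clipped_gradient, noise_value)  # adjust discriminator, guide generator
        loss = privacy_loss()                      # privacy loss of the current model
        if lo <= loss <= hi:
            return rnd, loss                       # converged within the preset range
    return max_rounds, privacy_loss()
```

With toy closures that drive the loss down by one per round, the loop stops exactly when the loss first enters the preset range.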
Further, the step of inputting the image training set and the first image generation set into a discriminator of the generative adversarial network model for discrimination to obtain a discrimination result includes:
classifying and distinguishing the image training set and the first image generation set through the discriminator to obtain a classification result;
determining, according to the classification result, the discrimination probability with which the discriminator discriminates the first image generation set;
and comparing the discrimination probability with a preset probability to obtain the discrimination result.
Further, the step of clipping the original gradient of the discriminator to obtain a clipping gradient includes:
acquiring an original gradient vector of each layer of the neural network of the discriminator;
determining a clipping threshold corresponding to each layer of the neural network based on the original gradient vector;
and clipping the original gradient vector according to the clipping threshold to obtain the clipping gradient corresponding to each layer of the neural network.
Further, the step of determining the clipping threshold corresponding to each layer of the neural network based on the original gradient vector includes:
determining a second-order norm of the original gradient vector, and calculating the clipping threshold according to the second-order norm.
Further, the step of clipping the original gradient vector according to the clipping threshold to obtain the clipping gradient corresponding to each layer of the neural network includes:
dividing the second-order norm by the clipping threshold to obtain a first quotient;
comparing the first quotient with a first numerical value to obtain the maximum of the two;
and calculating the ratio of the original gradient vector to that maximum to obtain the clipping gradient.
Further, the step of determining the noise value of the added noise according to the clipping gradient includes:
acquiring the Gaussian distribution of the random noise;
and calculating the noise value according to the Gaussian distribution and the clipping gradient.
Further, the step of calculating the privacy loss value of the generative adversarial network model includes:
calculating a differential privacy sensitivity according to the image training set and the first image generation set;
and calculating the privacy loss value by adopting a Markov formula based on the differential privacy sensitivity.
In order to solve the above technical problem, an embodiment of the present application further provides a device for training a generative adversarial network model based on federated learning, which adopts the following technical solution:
a generation module, configured to input random noise data into a generator of the generative adversarial network model to obtain a first image generation set;
a discrimination module, configured to input the image training set and the first image generation set into a discriminator of the generative adversarial network model for discrimination to obtain a discrimination result;
a clipping module, configured to, when the discrimination result does not meet the preset condition, clip the original gradient of the discriminator to obtain a clipping gradient, and determine the noise value of the added noise according to the clipping gradient;
an adjusting module, configured to adjust the model parameters of the discriminator according to the clipping gradient and the noise value, and update the model parameters of the generator under the guidance of the discriminator after its model parameters are adjusted;
the generation module is further configured to input noise corresponding to the noise value into the generator after the parameter adjustment, so as to obtain a second image generation set;
the discrimination module is further configured to input the image verification set and the second image generation set into the discriminator, calculate the privacy loss value of the generative adversarial network model, and if the privacy loss value is not within the preset range, iteratively update the generative adversarial network model until the privacy loss value falls within the preset range.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
the computer device includes a memory having computer readable instructions stored therein which, when executed by the processor, implement the steps of the federated learning-based countermeasure generation network model training method described above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
the computer readable storage medium has stored thereon computer readable instructions which, when executed by a processor, implement the steps of the federated learning-based countermeasure generation network model training method described above.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the method comprises the steps of inputting random noise data into a generator of an antagonism generation network model to obtain a first image generation number set, inputting an image training set and the first image generation set into a discriminator of the antagonism generation network model for discrimination to obtain a discrimination result, when the discrimination result does not meet a preset condition, cutting an original gradient of the discriminator to obtain a cutting gradient, determining a noise value of added noise according to the cutting gradient, adjusting model parameters of the discriminator according to the cutting gradient and the noise value, updating the model parameters of the generator according to guidance of the discriminator after the model parameters are adjusted, inputting an image verification set into the generator after the parameters are adjusted to obtain a second image generation set, inputting an image verification set and the second image generation set into the discriminator, calculating a privacy loss value of the antagonism generation network model, and if the privacy loss value is not in a preset range, iteratively updating the antibiotic network model until the privacy loss value falls into a preset range; the first image generation set is generated after the characteristics of the original image data set are learned by self through random noise, so that the original image data set can be protected from being damaged; meanwhile, the generated image data set and the original image data set are not in one-to-one correspondence, so that an attacker cannot distinguish the authenticity of the data after acquiring the local original image data, and the privacy of the image data set of the medical institution can be effectively protected; in addition, the discriminator dynamically adjusts and adds noise, and the discrimination capability of the discriminator can be improved.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a generative adversarial network model training method based on federated learning according to the present application;
FIG. 3 is a schematic structural diagram of the FA-GAN model of the present application;
FIG. 4 is a schematic structural diagram of one embodiment of a generative adversarial network model training device based on federated learning according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
In order to solve the problems in the related art that local medical image data are easy to decrypt and obtain and that security and privacy are low, the present application provides a generative adversarial network model training method based on federated learning, which relates to artificial intelligence and can be applied to the system architecture 100 shown in FIG. 1. The system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
It should be noted that the method for training a generative adversarial network model based on federated learning provided in the embodiment of the present application is generally executed by a terminal device, and accordingly, the corresponding training device is generally installed in a terminal device.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow diagram of one embodiment of the generative adversarial network model training method based on federated learning according to the present application is shown, including the following steps:
step S201, inputting the random noise data into a generator of the countermeasure generation network model to obtain a first image generation set.
In this embodiment, the generative adversarial network model is an FA-GAN (flexible adaptive network) model, which serves as the local training model for federated learning. Taking two participants, medical institution A and medical institution B, as an example, the federated learning process based on the FA-GAN model includes the following steps:
1) medical institution A and medical institution B each train the FA-GAN model locally using their respective case databases;
2) medical institution A and medical institution B upload the trained model parameters and weights to a central server;
3) the central server aggregates the received weight information of both parties' local models and performs joint training on the central server to generate a global model;
4) the central server updates the model parameters of the global model and sends the updated parameters to medical institution A and medical institution B, which each update their local model parameters;
5) steps 1) to 4) are repeated until a stop condition is satisfied.
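The five steps above can be sketched as one server-side aggregation round. The patent does not fix the aggregation rule, so a FedAvg-style coordinate-wise mean of the uploaded weight vectors is assumed here:

```python
def federated_round(local_weights):
    """Aggregate the weight vectors uploaded by the participating
    institutions into a global model by coordinate-wise averaging
    (a FedAvg-style mean; the exact aggregation rule is an assumption).
    The result would be sent back to each institution for its local update."""
    n = len(local_weights)            # number of participating institutions
    dim = len(local_weights[0])       # number of model parameters
    return [sum(w[j] for w in local_weights) / n for j in range(dim)]
```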
In this embodiment, referring to FIG. 3, a structural schematic diagram of the FA-GAN model is shown. The FA-GAN model includes a generator and a discriminator. To ensure the security of the local image data set in the federated learning process, the generator of the FA-GAN model automatically learns the characteristics of the original image data set and rewrites the data set, while the discriminator dynamically adjusts the noise scale during training to guide the generator, improving the usability and privacy of the local image data set without affecting the training effect of the final model.
The random noise data is sampled from a predefined noise distribution, typically a simple, easy-to-sample distribution such as the uniform distribution Uniform(-1, 1) or the Gaussian distribution Gaussian(0, 1). In this embodiment, random noise data may be sampled from the fitted Gaussian distribution and then input to the generator as the base data from which the generator produces the first image generation set.
The random noise data is input into the generator, which self-learns the data characteristics of the original image data set and outputs a first image generation set highly similar to the original image data set.
Step S202, inputting the image training set and the first image generation set into a discriminator of the generative adversarial network model for discrimination to obtain a discrimination result.
In this embodiment, an original image data set local to a medical institution is obtained and divided into an image training set and an image verification set according to a predetermined ratio. For example, assuming the original image data set includes 70000 images, 60000 images are used as the image training set and 10000 images as the image verification set. The image training set is then divided into N training batches, where N is a natural number greater than zero.
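The split-and-batch preparation described above can be sketched as follows; the function name and signature are illustrative, not from the patent:

```python
def split_and_batch(dataset, train_size, batch_count):
    """Split an original image data set into a training part and a
    verification part by a predetermined size, then cut the training part
    into batch_count batches (for the 70000-image example in the text,
    train_size would be 60000, leaving 10000 for verification)."""
    train, verification = dataset[:train_size], dataset[train_size:]
    batch_size = -(-len(train) // batch_count)  # ceiling division
    batches = [train[i:i + batch_size] for i in range(0, len(train), batch_size)]
    return batches, verification
```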
The image training set is input to the discriminator in batches; the discriminator distinguishes the input image training set from the first image generation set and outputs whether each sample is true (from the real image training set) or false (from the first image generation set produced by the generator).
Specifically, the image training set and the first image generation set are classified and discriminated by the discriminator to obtain a classification result; the discrimination probability with which the discriminator discriminates the first image generation set is determined according to the classification result; and the discrimination probability is compared with the preset probability to obtain the discrimination result.
The discriminator is a GCN (graph convolutional network) used for image feature extraction, comprising an embedding layer, a convolutional layer, a pooling layer and a softmax layer. The discriminator has access to the original image data set during training and may memorize certain training samples, which increases the risk of attack. To ensure the security of the local image data, the discriminator's capability is improved through adaptive gradient clipping and dynamic noise distribution; that is, the discriminator's memorization of the original image data set is reduced while its discrimination capability is improved.
The discriminator classifies the input image training set and the first image generation set and outputs the probability that a sample belongs to the real samples, where the real samples are the image training set.
Generally, a sigmoid function is adopted as the output layer of the discriminator: if the output is close to 1, the current data is judged to come from the real data set; if the output is close to 0, the current data is judged to come from the simulated data produced by the generator. In this embodiment, if the output probability falls within the preset range, the discrimination capability of the discriminator meets the requirement; otherwise, gradient clipping is applied to the FA-GAN model, the noise value to be added is determined, and the discrimination result is fed back to the generator to instruct it to update. Thus, after the image data set is processed by the FA-GAN model, an attacker cannot judge whether noise has been added to the data set, that is, cannot distinguish the authenticity of the data, thereby protecting the security of the local data.
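The sigmoid decision described above can be sketched as follows; the preset probability of 0.5 is an illustrative default, not a value fixed by the patent:

```python
import math

def discrimination_result(logit, preset_probability=0.5):
    """Sigmoid output layer of the discriminator: map a raw score to the
    probability that the input comes from the real image training set, then
    compare it against a preset probability to produce the discrimination
    result (True = judged real, False = judged generated)."""
    p = 1.0 / (1.0 + math.exp(-logit))
    return p, p >= preset_probability
```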
It is emphasized that, in order to further ensure the privacy and security of the image training set, the image training set may also be stored in a node of a blockchain.
The blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated by cryptographic methods, each data block containing information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Step S203, when the discrimination result does not meet the preset condition, clipping the original gradient of the discriminator to obtain a clipping gradient, and determining the noise value of the added noise according to the clipping gradient.
Specifically, an original gradient vector of each layer of the neural network of the discriminator is obtained, a clipping threshold corresponding to each layer of the neural network is determined based on the original gradient vector, and the original gradient vector is clipped according to the clipping threshold to obtain a clipping gradient corresponding to each layer of the neural network.
In some optional implementations of this embodiment, the step of determining the clipping threshold corresponding to each layer of the neural network based on the original gradient vector specifically includes:
determining a second-order norm of the original gradient vector, and calculating the clipping threshold according to the second-order norm.
In this embodiment, the second-order norm is the 2-norm. Denote by $g_m(x_i)$ the original gradient vector of the m-th neural network layer obtained on the i-th batch, and compute its 2-norm $\|g_m(x_i)\|_2$. Each iteration produces an original gradient vector; the N 2-norms corresponding to the m-th layer over the N batches are determined, and their average is taken as the clipping threshold:

$$C_m = \frac{1}{N}\sum_{i=1}^{N}\|g_m(x_i)\|_2$$

where N is the index of the currently input N-th batch of the image training set, and m indexes the m-th neural network layer of the discriminator.
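Under the definitions above, the adaptive clipping threshold for layer m is the mean 2-norm of that layer's gradients over the batches seen so far. A minimal sketch, with the function name chosen for illustration:

```python
import math

def clipping_threshold(layer_gradients):
    """Per-layer adaptive clipping threshold: the mean of the 2-norms of the
    original gradient vectors produced by one layer over the N batches seen
    so far, matching C_m = (1/N) * sum_i ||g_m(x_i)||_2. Gradients are plain
    Python lists of floats here."""
    norms = [math.sqrt(sum(v * v for v in g)) for g in layer_gradients]
    return sum(norms) / len(norms)
```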
In this embodiment, the step of clipping the original gradient vector according to the clipping threshold to obtain the clipping gradient corresponding to each layer of the neural network specifically includes:
dividing the second-order norm by the clipping threshold to obtain a first quotient;
comparing the first quotient with a first numerical value (the constant 1) to obtain the maximum of the two;
and calculating the ratio of the original gradient vector to that maximum to obtain the clipping gradient.
The image training set and the first image generation set are input into the FA-GAN model for training, yielding the clipping threshold $C_{x_i}$ and the original gradient vector $g(x_i)$ for each iteration. The clipping gradient $\bar{g}(x_i)$ of the m-th layer in that iteration can then be calculated as:

$$\bar{g}(x_i) = \frac{g(x_i)}{\max\left(1,\; \dfrac{\|g(x_i)\|_2}{C_{x_i}}\right)}$$

where $x_i$ is the i-th batch of the image training set, and $g(x_i)$ is the original gradient vector of the m-th neural network layer obtained by training with the i-th batch.
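The clipping rule above can be sketched directly; vectors are plain Python lists here and the helper name is illustrative:

```python
import math

def clip_gradient(g, threshold):
    """Clip an original gradient vector g to the clipping threshold C using
    g_bar = g / max(1, ||g||_2 / C): vectors whose norm is at most C pass
    through unchanged, while longer vectors are rescaled to norm C."""
    norm = math.sqrt(sum(v * v for v in g))
    scale = max(1.0, norm / threshold)
    return [v / scale for v in g]
```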
In some optional implementations of this embodiment, the noise value of the added noise is determined according to the clipping gradient as follows:
acquiring the Gaussian distribution of the random noise;
and calculating the noise value of the added noise according to the Gaussian distribution and the clipping gradient.
The noise value of the added noise is specifically calculated by the following formula:

$$\tilde{g} = \frac{1}{S}\left(\sum_{i}\bar{g}(x_i) + \mathcal{N}\!\left(0,\; \sigma^2 C^2 I\right)\right)$$

where $x_i$ is the i-th batch of the image training set, $\sigma$ is the noise scale (i.e., the noise value), $C$ is the gradient clipping threshold, $\bar{g}(x_i)$ is the clipping gradient after clipping according to the clipping threshold, $S$ is the number of samples in the image training set, and $\mathcal{N}(0, \sigma^2 C^2 I)$ denotes noise obeying a Gaussian distribution with mean 0 and variance $\sigma^2 C^2$; $I$ is an identity matrix whose dimensionality relates to the number of samples and gradients, used in the noise-addition matrix operation.
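The noise-addition formula can be sketched as follows. This is a DP-SGD style stand-in under the stated assumptions (per-coordinate Gaussian noise with standard deviation sigma * C, averaged over S clipped gradients); the seeded RNG is only for reproducibility:

```python
import random

def noisy_average_gradient(clipped_gradients, sigma, threshold, rng=None):
    """Noisy update in the spirit of the formula above: sum the clipped
    per-batch gradients, add Gaussian noise N(0, (sigma*C)^2) to each
    coordinate, and divide by the number S of contributing gradients."""
    rng = rng or random.Random(0)
    s = len(clipped_gradients)
    dim = len(clipped_gradients[0])
    total = [sum(g[j] for g in clipped_gradients) for j in range(dim)]
    return [(total[j] + rng.gauss(0.0, sigma * threshold)) / s for j in range(dim)]
```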
And step S204, adjusting the model parameters of the discriminator according to the cutting gradient and the noise value, and guiding and updating the model parameters of the generator according to the discriminator after the model parameters are adjusted.
Specifically, the original gradient vector of each neural network layer is clipped according to the clipping threshold of that layer to obtain the clipping gradient of each layer, and the noise to be added to the discriminator is dynamically adjusted according to the noise value. Finally, the model parameters of the discriminator are adjusted according to the clipping gradient and the adjusted noise, and the discriminator with adjusted model parameters instructs the generator to update its corresponding model parameters.
In this embodiment, the local data in the FA-GAN model training process is protected through differential privacy. Differential privacy requires a given target privacy budget, and for a given target privacy budget, each round of noise addition consumes part of the overall privacy budget. The model parameters of the discriminator are adjusted according to the preset learning rate and the noise added to the gradient, so that the parameter adjustment satisfies differential privacy.
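Putting the pieces together, the differentially private discriminator update of step S204 — clip each per-example gradient, average, add Gaussian noise, then step with the preset learning rate — can be sketched in a DP-SGD style (a sketch under stated assumptions; all names and the single-tensor parameter handling are hypothetical):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, C, sigma, lr, rng=None):
    """One differentially private update of the discriminator parameters:
    clip each per-example gradient to L2 norm C, average, add Gaussian
    noise N(0, sigma^2 C^2 I), then descend with learning rate lr."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [g / max(1.0, np.linalg.norm(g) / C) for g in per_example_grads]
    S = len(clipped)
    noisy = (np.sum(clipped, axis=0)
             + rng.normal(0.0, sigma * C, params.shape)) / S
    return params - lr * noisy

params = np.zeros(2)
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
new_params = dp_sgd_step(params, grads, C=2.0, sigma=0.0, lr=0.1)
```

In a real model this update would be applied layer by layer with the per-layer clipping thresholds described above.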
In step S205, noise corresponding to the noise value is input into the generator after the parameter adjustment, so as to obtain a second image generation set.
And after the added noise value is determined, inputting the noise corresponding to the noise value into a generator after the parameters are adjusted, enabling the generator to self-learn the data characteristics of the original image data set, and outputting a second image generation set highly similar to the original image data set.
And S206, inputting the image verification set and the second image generation set into a discriminator, calculating a privacy loss value of the countermeasure generation network model, and if the privacy loss value is not in a preset range, iteratively updating the countermeasure generation network model until the privacy loss value falls in the preset range.
And the image verification set is used for verifying the trained FA-GAN model, and the image verification set and the second image generation set are input to the discriminator for verification.
In the present embodiment, whether the FA-GAN model is trained is confirmed by calculating the privacy loss value.
Specifically, the step of calculating the privacy loss value against the generative network model is as follows:
calculating a differential privacy sensitivity according to the image training set and the first image generation set;
and calculating a privacy loss value by adopting a Markov formula based on the differential privacy sensitivity.
Gradient clipping and noise addition raise the problem of estimating sensitivity: the sensitivity determines how much random noise needs to be added to the result to achieve differential privacy. Differential privacy serves here as a privacy protection method, protecting the security of local data and avoiding data leakage.
In this embodiment, the differential privacy sensitivity is calculated as follows:
Δf = max_{D,D'} ||f(D) − f(D')||
wherein D represents the data set to which the neural network layer has not added noise, namely the image verification set; D' represents the data set to which the neural network layer has added noise, namely the second image generation set; f(D) denotes the output without added noise; and f(D') denotes the output with noise added.
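The sensitivity formula above is simply the largest norm of the difference between the query evaluated on the two adjacent data sets; for a single pair of outputs it reduces to one norm computation (an illustrative helper with hypothetical names):

```python
import numpy as np

def l2_sensitivity(f_of_d, f_of_d_prime):
    """L2 sensitivity contribution of one adjacent pair:
    ||f(D) - f(D')||_2, per the formula above."""
    diff = np.asarray(f_of_d, dtype=float) - np.asarray(f_of_d_prime, dtype=float)
    return float(np.linalg.norm(diff))

delta_f = l2_sensitivity([1.0, 2.0], [1.0, 0.0])
```

The full sensitivity would take the maximum of this quantity over all adjacent pairs D, D'.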
According to the differential privacy sensitivity, each round of noise addition affects the overall privacy budget consumption, and the consumption of each round needs to be accurately estimated to achieve the lowest overall privacy budget consumption.
The calculation formula of the privacy loss value is as follows:
c(o; M, aux, d, d') = log( Pr[M(aux, d) = o] / Pr[M(aux, d') = o] )
wherein M is a given random algorithm, d and d' are a pair of adjacent data sets, aux denotes an auxiliary input, and o is an output result satisfying o ∈ R; Pr[M(aux, d) = o] denotes the probability that the output belongs to data set d, and Pr[M(aux, d') = o] denotes the probability that the output belongs to data set d'. From this formula, the change of the privacy loss value under the target gradient privacy budget consumption can be obtained, and the effect of the added noise on the whole model is measured according to that change. As the number of training rounds increases, the layered gradient clipping and the dynamic noise adjustment achieve better effects in both the training process and the final result, and the final dynamic adjustment gives the optimal privacy budget allocation.
It should be noted that the privacy loss value needs to be minimized as much as possible to obtain a better privacy protection effect.
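The privacy loss formula above is the log-ratio of the two output probabilities; as a minimal numeric sketch (the function name is hypothetical):

```python
import math

def privacy_loss(p_d, p_d_prime):
    """Privacy loss at one output o:
    log( Pr[M(aux, d) = o] / Pr[M(aux, d') = o] )."""
    return math.log(p_d / p_d_prime)

# Equal probabilities on adjacent data sets mean zero privacy loss.
no_loss = privacy_loss(0.5, 0.5)
```

A loss of zero corresponds to outputs that reveal nothing about which of the two adjacent data sets was used; the training loop above seeks to keep this value within the preset range.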
The first image generation set is generated after the characteristics of the original image data set are learned by self through random noise, so that the original image data set can be protected from being damaged; meanwhile, the generated image data set and the original image data set are not in one-to-one correspondence, so that an attacker cannot distinguish the authenticity of the data after acquiring the local original image data, and the privacy of the image data set of the medical institution can be effectively protected; in addition, the discriminator dynamically adjusts and adds noise, and the discrimination capability of the discriminator can be improved.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The application can be applied to the field of intelligent security and protection, and therefore the construction of an intelligent city is promoted.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware associated with computer readable instructions, which can be stored in a computer readable storage medium, and when executed, the processes of the embodiments of the methods described above can be included. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and they may be performed in other orders. Moreover, at least part of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
With further reference to fig. 4, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a device for training a federated learning-based confrontation generation network model, where the embodiment of the device corresponds to the embodiment of the method shown in fig. 2, and the device may be applied to various electronic devices in particular.
As shown in fig. 4, the apparatus 400 for training a federated learning-based confrontation generation network model according to this embodiment includes: a generating module 401, a judging module 402, a clipping module 403 and an adjusting module 404.
Wherein:
the generation module 401 is configured to input random noise data into a generator of the countermeasure generation network model to obtain a first image generation set;
the judging module 402 is configured to input the image training set and the first image generation set into a discriminator of the countermeasure generation network model for judgment to obtain a judgment result;
the cutting module 403 is configured to, when the determination result does not satisfy a preset condition, cut the original gradient of the determiner to obtain a cutting gradient, and determine a noise value to which noise is added according to the cutting gradient;
the adjusting module 404 is configured to adjust a model parameter of the discriminator according to the clipping gradient and the noise value, and update the model parameter of the generator according to guidance of the discriminator after adjusting the model parameter;
the generating module 401 is further configured to input noise corresponding to the noise value into the generator after the parameter adjustment, so as to obtain a second image generation set;
the determination module 402 is further configured to input the image verification set and the second image generation set to the determiner, calculate a privacy loss value of the countermeasure generation network model, and if the privacy loss value is not within a preset range, iteratively update the countermeasure generation network model until the privacy loss value falls within a preset range.
It is emphasized that the image training set may also be stored in a node of a blockchain in order to further ensure the privacy and security of the image training set.
According to the confrontation generation network model training device based on the federal learning, the first image generation set is generated by adding the characteristics of the random noise self-learning original image data set, so that the original image data set can be protected from being damaged; meanwhile, the generated image data set and the original image data set are not in one-to-one correspondence, so that an attacker cannot distinguish the authenticity of the data after acquiring the local original image data, and the privacy of the image data set of the medical institution can be effectively protected; in addition, the discriminator dynamically adjusts and adds noise, and the discrimination capability of the discriminator can be improved.
In this embodiment, the determining module 402 is further configured to:
classifying and distinguishing the image training set and the first image generation set through the discriminator to obtain a classification result;
determining the discrimination probability of the discriminator for discriminating the first image generation set according to the classification result;
and comparing the identification probability with a preset probability to obtain a judgment result.
In this embodiment, the clipping module 403 is further configured to:
acquiring an original gradient vector of each layer of neural network of the discriminator;
determining a clipping threshold corresponding to each layer of the neural network based on the original gradient vector;
and cutting the original gradient vector according to the cutting threshold value to obtain the cutting gradient corresponding to each layer of the neural network.
In some optional implementations of this embodiment, the clipping module 403 is further configured to determine a second-order norm of the original gradient vector, and calculate the clipping threshold according to the second-order norm.
In some optional implementations of this embodiment, the clipping module 403 is further configured to:
dividing the second-order norm by the clipping threshold value to obtain a first quotient value;
comparing the first quotient value with a first numerical value to obtain the maximum value of the first quotient value and the first numerical value;
and calculating the ratio of the original gradient vector to the maximum value to obtain the cutting gradient.
In this embodiment, the clipping module 403 is configured to:
acquiring Gaussian distribution of the random noise;
and calculating to obtain the noise value according to the Gaussian distribution and the cutting gradient.
In this embodiment, the determining module 402 further includes a calculating submodule, configured to:
calculating a differential privacy sensitivity according to the image training set and the first image generation set;
and calculating the privacy loss value by adopting a Markov formula based on the differential privacy sensitivity.
In the embodiment, the privacy loss value and the change of the privacy loss value are calculated, the performance of the added noise on the whole model is measured according to the change of the privacy loss value, the layered gradient clipping and the dynamic noise adjustment have better effects in the training process and the final result along with the increase of the number of training rounds, and the final dynamic adjustment is the optimal privacy budget allocation.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 5, fig. 5 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 5 comprises a memory 51, a processor 52, and a network interface 53, which are communicatively connected to each other via a system bus. It is noted that only a computer device 5 having components 51-53 is shown, but it should be understood that not all of the shown components need be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 51 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 51 may be an internal storage unit of the computer device 5, such as a hard disk or a memory of the computer device 5. In other embodiments, the memory 51 may also be an external storage device of the computer device 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the computer device 5. Of course, the memory 51 may also comprise both an internal storage unit of the computer device 5 and an external storage device thereof. In this embodiment, the memory 51 is generally used for storing an operating system and various types of application software installed on the computer device 5, such as computer readable instructions of the federated learning-based countermeasure generation network model training method. Further, the memory 51 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 52 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 52 is typically used to control the overall operation of the computer device 5. In this embodiment, the processor 52 is configured to execute computer readable instructions stored in the memory 51 or process data, such as computer readable instructions for executing the federated learning-based countermeasure generation network model training method.
The network interface 53 may comprise a wireless network interface or a wired network interface, and the network interface 53 is generally used for establishing communication connections between the computer device 5 and other electronic devices.
In the embodiment, when the processor executes the computer readable instructions stored in the memory, the procedure of the federal learning-based confrontation generation network model training method in the embodiment is realized, and the original image data set can be protected from being damaged by adding the features of the random noise self-learning original image data set to generate the first image generation set; meanwhile, the generated image data set and the original image data set are not in one-to-one correspondence, so that an attacker cannot distinguish the authenticity of the data after acquiring the local original image data, and the privacy of the image data set of the medical institution can be effectively protected; in addition, the discriminator dynamically adjusts and adds noise, and the discrimination capability of the discriminator can be improved.
The present application further provides another embodiment, which is to provide a computer-readable storage medium storing computer-readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the federal learning based confrontation generation network model training method as described above, wherein the original image data set can be protected from being damaged by adding random noise to self-learn the characteristics of the original image data set and then generating a first image generation set; meanwhile, the generated image data set and the original image data set are not in one-to-one correspondence, so that an attacker cannot distinguish the authenticity of the data after acquiring the local original image data, and the privacy of the image data set of the medical institution can be effectively protected; in addition, the discriminator dynamically adjusts and adds noise, and the discrimination capability of the discriminator can be improved.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not restrictive, of the broad invention, and that the appended drawings illustrate preferred embodiments of the invention and do not limit the scope of the invention. This application is capable of embodiments in many different forms and is provided for the purpose of enabling a thorough understanding of the disclosure of the application. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that the present application may be practiced without modification or with equivalents of some of the features described in the foregoing embodiments. All equivalent structures made by using the contents of the specification and the drawings of the present application are directly or indirectly applied to other related technical fields and are within the protection scope of the present application.

Claims (10)

1. A method for training an confrontation generation network model based on federal learning is characterized by comprising the following steps:
inputting random noise data into a generator of a countermeasure generation network model to obtain a first image generation set;
inputting the image training set and the first image generation set into a discriminator of the countermeasure generation network model for discrimination to obtain a discrimination result;
when the judgment result does not meet the preset condition, cutting the original gradient of the discriminator to obtain a cutting gradient, and determining a noise value of the added noise according to the cutting gradient;
adjusting the model parameters of the discriminator according to the cutting gradient and the noise value, and updating the model parameters of the generator according to the guidance of the discriminator after adjusting the model parameters;
inputting the noise corresponding to the noise value into the generator after the parameters are adjusted to obtain a second image generation set;
inputting an image verification set and the second image generation set into the discriminator, calculating a privacy loss value of the countermeasure generation network model, and if the privacy loss value is not in a preset range, iteratively updating the countermeasure generation network model until the privacy loss value falls in the preset range.
2. The method of claim 1, wherein the step of inputting the image training set and the first image generation set into a discriminator of the countermeasure generating network model for discrimination to obtain a discrimination result comprises:
classifying and distinguishing the image training set and the first image generation set through the discriminator to obtain a classification result;
determining the discrimination probability of the discriminator for discriminating the first image generation set according to the classification result;
and comparing the identification probability with a preset probability to obtain a judgment result.
3. The method of claim 1, wherein the step of clipping the raw gradient of the arbiter to obtain a clipping gradient comprises:
acquiring an original gradient vector of each layer of neural network of the discriminator;
determining a clipping threshold corresponding to each layer of the neural network based on the original gradient vector;
and cutting the original gradient vector according to the cutting threshold value to obtain the cutting gradient corresponding to each layer of the neural network.
4. The method of claim 3, wherein the step of calculating a clipping threshold for each layer of the neural network based on the raw gradient vectors comprises:
and determining a second-order norm of the original gradient vector, and calculating the cutting threshold according to the second-order norm.
5. The method of claim 4, wherein the step of clipping the original gradient vectors according to the clipping threshold to obtain the clipping gradient corresponding to each layer of the neural network comprises:
dividing the second-order norm by the clipping threshold value to obtain a first quotient value;
comparing the first quotient value with a first numerical value to obtain the maximum value of the first quotient value and the first numerical value;
and calculating the ratio of the original gradient vector to the maximum value to obtain the cutting gradient.
6. The method of claim 5, wherein the step of determining additive noise according to the clipping gradient comprises:
acquiring Gaussian distribution of the random noise;
and calculating to obtain the noise value according to the Gaussian distribution and the cutting gradient.
7. The federated learning-based confrontation generation network model training method of claim 1, wherein the step of calculating the privacy loss value of the confrontation generation network model comprises:
calculating a differential privacy sensitivity according to the image training set and the first image generation set;
and calculating the privacy loss value by adopting a Markov formula based on the differential privacy sensitivity.
8. A confrontation generation network model training device based on federal learning is characterized by comprising:
the generation module is used for inputting random noise data into a generator of the countermeasure generation network model to obtain a first image generation set;
the discrimination module is used for inputting the image training set and the first image generation set into a discriminator of the confrontation generation network model for discrimination to obtain a discrimination result;
the cutting module is used for cutting the original gradient of the discriminator to obtain a cutting gradient when the discrimination result does not meet the preset condition, and determining the noise value of the added noise according to the cutting gradient;
the adjusting module is used for adjusting the model parameters of the discriminator according to the cutting gradient and the noise value and updating the model parameters of the generator according to the guidance of the discriminator after the model parameters are adjusted;
the generating module is further configured to input noise corresponding to the noise value into the generator after the parameter adjustment, so as to obtain a second image generation set;
the judgment module is further configured to input the image verification set and the second image generation set to the discriminator, calculate a privacy loss value of the countermeasure generation network model, and if the privacy loss value is not within a preset range, iteratively update the countermeasure generation network model until the privacy loss value falls within the preset range.
9. A computer device comprising a memory having computer readable instructions stored therein and a processor that when executed implement the steps of the federal learning based confrontation generation network model training method of any of claims 1 to 7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the federal learning based confrontation generation network model training method of any of claims 1 to 7.
CN202110758657.4A 2021-07-05 2021-07-05 Federal learning-based countermeasure generation network model training method and related equipment thereof Active CN113435583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110758657.4A CN113435583B (en) 2021-07-05 2021-07-05 Federal learning-based countermeasure generation network model training method and related equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110758657.4A CN113435583B (en) 2021-07-05 2021-07-05 Federal learning-based countermeasure generation network model training method and related equipment thereof

Publications (2)

Publication Number Publication Date
CN113435583A true CN113435583A (en) 2021-09-24
CN113435583B CN113435583B (en) 2024-02-09

Family

ID=77759113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110758657.4A Active CN113435583B (en) 2021-07-05 2021-07-05 Federal learning-based countermeasure generation network model training method and related equipment thereof

Country Status (1)

Country Link
CN (1) CN113435583B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113961967A (en) * 2021-12-13 2022-01-21 支付宝(杭州)信息技术有限公司 Method and device for jointly training natural language processing model based on privacy protection
CN114169007A (en) * 2021-12-10 2022-03-11 西安电子科技大学 Medical privacy data identification method based on dynamic neural network
CN114239860A (en) * 2021-12-07 2022-03-25 支付宝(杭州)信息技术有限公司 Model training method and device based on privacy protection
CN114912624A (en) * 2022-04-12 2022-08-16 支付宝(杭州)信息技术有限公司 Longitudinal federal learning method and device for business model
CN115426205A (en) * 2022-11-05 2022-12-02 北京淇瑀信息科技有限公司 Encrypted data generation method and device based on differential privacy
WO2024001283A1 (en) * 2022-06-30 2024-01-04 商汤集团有限公司 Training method for image processing network, and image processing method and apparatus
CN117788983A (en) * 2024-02-28 2024-03-29 青岛海尔科技有限公司 Image data processing method and device based on large model and storage medium
CN117936011A (en) * 2024-03-19 2024-04-26 泰山学院 Intelligent medical service management system based on big data
CN117993480A (en) * 2024-04-02 2024-05-07 湖南大学 AIGC federal learning method for designer style fusion and privacy protection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000158204A (en) * 1998-11-24 2000-06-13 Mitsubishi Materials Corp Surface-covering cemented carbide alloy cutting tool having hard covering layer exhibiting excellent chipping resistance
CN110147797A (en) * 2019-04-12 2019-08-20 中国科学院软件研究所 A kind of sketch completion and recognition methods and device based on production confrontation network
CN110969243A (en) * 2019-11-29 2020-04-07 支付宝(杭州)信息技术有限公司 Method and device for training countermeasure generation network for preventing privacy leakage
WO2020134704A1 (en) * 2018-12-28 2020-07-02 深圳前海微众银行股份有限公司 Model parameter training method based on federated learning, terminal, system and medium
CN112070209A (en) * 2020-08-13 2020-12-11 河北大学 Stable controllable image generation model training method based on W distance
US20210049298A1 (en) * 2019-08-14 2021-02-18 Google Llc Privacy preserving machine learning model training


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI YING; HE CHUNLIN: "Data Differential-Privacy-Preserving Stochastic Gradient Descent Algorithm for Deep Neural Network Training", Computer Applications and Software, no. 04 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114239860A (en) * 2021-12-07 2022-03-25 Alipay (Hangzhou) Information Technology Co., Ltd. Model training method and device based on privacy protection
CN114169007A (en) * 2021-12-10 2022-03-11 Xidian University Medical privacy data identification method based on dynamic neural network
CN114169007B (en) * 2021-12-10 2024-05-14 Xidian University Medical privacy data identification method based on dynamic neural network
CN113961967A (en) * 2021-12-13 2022-01-21 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for jointly training natural language processing model based on privacy protection
CN113961967B (en) * 2021-12-13 2022-03-22 Alipay (Hangzhou) Information Technology Co., Ltd. Method and device for jointly training natural language processing model based on privacy protection
CN114912624A (en) * 2022-04-12 2022-08-16 Alipay (Hangzhou) Information Technology Co., Ltd. Vertical federated learning method and device for a business model
WO2024001283A1 (en) * 2022-06-30 2024-01-04 SenseTime Group Limited Training method for image processing network, and image processing method and apparatus
CN115426205A (en) * 2022-11-05 2022-12-02 Beijing Qiyu Information Technology Co., Ltd. Encrypted data generation method and device based on differential privacy
CN117788983A (en) * 2024-02-28 2024-03-29 Qingdao Haier Technology Co., Ltd. Image data processing method and device based on large model and storage medium
CN117788983B (en) * 2024-02-28 2024-05-24 Qingdao Haier Technology Co., Ltd. Image data processing method and device based on large model and storage medium
CN117936011A (en) * 2024-03-19 2024-04-26 Taishan University Intelligent medical service management system based on big data
CN117993480A (en) * 2024-04-02 2024-05-07 Hunan University AIGC federated learning method for designer style fusion and privacy protection

Also Published As

Publication number Publication date
CN113435583B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN113435583B (en) Federated learning-based generative adversarial network model training method and related equipment thereof
WO2021179720A1 (en) Federated-learning-based user data classification method and apparatus, and device and medium
EP3627759B1 (en) Method and apparatus for encrypting data, method and apparatus for training machine learning model, and electronic device
WO2021155713A1 (en) Weight grafting model fusion-based facial recognition method, and related device
WO2021120677A1 (en) Warehousing model training method and device, computer device and storage medium
CN110929799B (en) Method, electronic device, and computer-readable medium for detecting abnormal user
CN110969243B (en) Method and device for training a generative adversarial network for preventing privacy leakage
CN112863683A (en) Medical record quality control method and device based on artificial intelligence, computer equipment and storage medium
CN113449783A (en) Adversarial sample generation method, system, computer device and storage medium
WO2023071105A1 (en) Method and apparatus for analyzing feature variable, computer device, and storage medium
CN112668482B (en) Face recognition training method, device, computer equipment and storage medium
CN112766649A (en) Target object evaluation method based on multi-scoring card fusion and related equipment thereof
CN112035549A (en) Data mining method and device, computer equipment and storage medium
CN110602120A (en) Network-oriented intrusion data detection method
CN113919401A (en) Modulation type identification method and device based on constellation diagram characteristics and computer equipment
CN114494747A (en) Model training method, image processing method, device, electronic device and medium
CN110197078B (en) Data processing method and device, computer readable medium and electronic equipment
CN114282258A (en) Screen capture data desensitization method and device, computer equipment and storage medium
CN115099875A (en) Data classification method based on decision tree model and related equipment
CN114117037A (en) Intention recognition method, device, equipment and storage medium
CN112733645A (en) Handwritten signature verification method and device, computer equipment and storage medium
CN112071331A (en) Voice file repairing method and device, computer equipment and storage medium
CN112417886A (en) Intention entity information extraction method and device, computer equipment and storage medium
CN113726785B (en) Network intrusion detection method and device, computer equipment and storage medium
CN113298747A (en) Picture and video detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant