CN115660037A - Method for distinguishing through deep neural network model - Google Patents

Method for distinguishing through deep neural network model

Info

Publication number
CN115660037A
CN115660037A (application CN202211317661.8A)
Authority
CN
China
Prior art keywords
network
sentence
vector
discrimination
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211317661.8A
Other languages
Chinese (zh)
Inventor
吴穗湘
李志军
林宪春
嵇志国
王峰
邱畅
黄振声
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGDONG ORIENTAL THOUGHT TECHNOLOGY CO LTD
Original Assignee
GUANGDONG ORIENTAL THOUGHT TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGDONG ORIENTAL THOUGHT TECHNOLOGY CO LTD filed Critical GUANGDONG ORIENTAL THOUGHT TECHNOLOGY CO LTD
Priority to CN202211317661.8A priority Critical patent/CN115660037A/en
Publication of CN115660037A publication Critical patent/CN115660037A/en
Pending legal-status Critical Current

Landscapes

  • Machine Translation (AREA)

Abstract

The invention provides a method for discriminating through a deep neural network model, comprising the following processing steps. Step 1: input true and pseudo sentence embedding vectors into a discrimination network for discrimination, so that the discrimination accuracy of the discrimination network reaches a set value. Step 2: continuously bring the sentence embedding vectors produced by the generation network closer to the distribution of real samples, improving the accuracy of the sentences the generation network produces. Step 3: once that accuracy reaches a set value, generate a set number of sentence embedding vectors through the generation network and average them to obtain the category center of the target type. Step 4: when data is input, calculate the distance between the sentence embedding vector generated from the data and the category center; if the distance exceeds a set threshold, judge the input to be of a non-corresponding type, and if it is below the threshold, of the corresponding type. The invention can effectively ensure the safety of information playing.

Description

Method for distinguishing through deep neural network model
Technical Field
The invention belongs to the field of information analysis, and particularly relates to a method for distinguishing through a deep neural network model.
Background
In public places, information that needs to be announced is played on display panels. Existing playback systems have no corresponding safeguards: the communication protocols of most manufacturers transmit in plaintext, with no information-security measures in the encoding, transmission, exchange, or processing links. Bad or false information can therefore easily be played, which harms the public and disturbs social order. A method for discriminating through a deep neural network model is thus needed to meet the use requirement.
Disclosure of Invention
The invention aims to provide a method for distinguishing through a deep neural network model, which can effectively ensure the safety of information playing.
To achieve this object, a method for discriminating by a deep neural network model is provided. The model comprises a generation network for generating sentence embedding vectors and a discrimination network, connected to the generation network, for judging the authenticity of an input sentence embedding vector. The method comprises the following processing steps:
Step 1: generate, through the generation network, true sentence embedding vectors from real use samples and pseudo sentence embedding vectors from non-use samples, then input them in turn into the discrimination network for judgment. During this phase the parameters of the generation network are held fixed and only the parameters of the discrimination network are adjusted, so that its discrimination accuracy keeps improving until it reaches a set value or converges;
Step 2: through an objective function that continuously lowers the discrimination accuracy of the discrimination network, bring the sentence embedding vectors produced by the generation network ever closer to the distribution of real samples. During this phase the parameters of the discrimination network are fixed and only the parameters of the generation network are adjusted through the objective function, until a set value is reached or training converges;
Step 3: perform steps 1 and 2 alternately so that the accuracies of the generation network and the discrimination network improve together through the adversarial process, until the discrimination network, given the pseudo and true sentence embedding vectors, can no longer tell them apart; the accuracy of the generation network is then considered to have reached the set value and to meet the requirements of actual use. At that point, stop alternating steps 1 and 2, then generate a set number of sentence embedding vectors through the generation network and average them to obtain the category center of the target type;
Step 4: when data is input, calculate the distance between the sentence embedding vector generated from the data and the category center. If the distance exceeds a set threshold, the input is judged to be of a non-corresponding type; if it is below the threshold, of the corresponding type.
Preferably, in step 1, the discrimination network calculates the negative log-likelihood of the distribution of the true and pseudo sentence embedding vectors. Each input embedding vector (true or pseudo) is written x = (x_1, x_2, …, x_n), where x_1, x_2, …, x_n are the elements of the vector and the subscript n is its total dimensionality; a typical value of n is 768, although other positive integers are possible. The discrimination network propagates the input x forward from the input layer through several hidden layers, each layer forming a weighted sum of its input with that layer's weight parameters:

z = Σ_{i=1}^{k} w_i · x_i + b

where x_i is the value of the i-th dimension (also called a feature) of the layer's input vector, w_i is the weight corresponding to feature x_i, b is the layer's bias, and k is the input dimension of the layer (k = n for the input layer). The sum is activated as a = δ(z) by the activation function δ, extracting features layer by layer. Finally the output layer produces a discrimination probability through a softmax function, recorded as y, and the negative log-likelihood -log(y) is taken as the objective, recorded as J(w);
then, a weight parameter and a bias parameter of the discrimination network are set by using a gradient back propagation method, and the weight parameter and the bias parameter are adjusted towards the direction of reducing the negative log likelihood, so that the discrimination accuracy of the discrimination network reaches a set value.
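As a concrete illustration of the forward pass and objective above, the following NumPy sketch computes the softmax discrimination probability y and the negative log-likelihood J(w) = -log(y) for one embedding vector. The ReLU activation, toy dimensions, and random weights are assumptions for illustration; the patent fixes none of them.

```python
import numpy as np

def softmax(t):
    e = np.exp(t - t.max())  # numerically stabilised softmax
    return e / e.sum()

def discriminator_nll(x, params):
    # Forward-propagate x through the hidden layers: each layer computes the
    # weighted sum z = W @ h + b and activates a = delta(z); delta = ReLU here
    # (an assumed choice, the patent does not fix the activation function).
    h = x
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b, 0.0)
    W_out, b_out = params[-1]
    y = softmax(W_out @ h + b_out)   # discrimination probabilities
    return -np.log(y[0])             # J(w) = -log(y) for the "true" class

rng = np.random.default_rng(0)
n = 8  # toy embedding dimension (the description mentions 768 as typical)
params = [(rng.normal(size=(16, n)), np.zeros(16)),   # hidden layer
          (rng.normal(size=(2, 16)), np.zeros(2))]    # output layer: real / fake
x = rng.normal(size=n)  # a toy sentence embedding vector
J = discriminator_nll(x, params)
print(J)  # non-negative; gradient back-propagation would reduce it
```

Gradient back-propagation, as described in the next paragraph, would then adjust the weights and biases to lower J.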
Preferably, the update formula used by the gradient back-propagation method is w_new = w_old - η · ∇J(w), where w_old is the weight parameter before the update, η is the learning rate, and ∇J(w) is the gradient; the minus sign indicates that the updated weight parameter w_new is set in the direction of gradient descent. The parameters are iteratively updated by this method.
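The update rule can be exercised on a toy one-parameter objective. The quadratic J(w) = (w - 3)^2 below is an illustrative stand-in, not part of the patent:

```python
def sgd_step(w_old, grad, eta=0.1):
    # w_new = w_old - eta * grad: step against the gradient so J(w) decreases.
    return w_old - eta * grad

# Illustrative objective J(w) = (w - 3)^2 with gradient 2*(w - 3);
# repeated updates drive w toward the minimiser w = 3.
w = 0.0
for _ in range(100):
    w = sgd_step(w, 2.0 * (w - 3.0))
print(w)  # approximately 3.0
```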
Preferably, in step 2, the objective function is calculated as

min_G max_D V(D, G) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z(z)}[log(1 - D(G(z)))]

where V(D, G) is the value function of alternately training the generation network and the discrimination network, D denotes the discrimination network, and G the generation network; D(x) is the discrimination network's output probability for input data x, and G(z) is the mapping of Gaussian noise z, sampled in the hidden vector space of the generation network, to a pseudo sentence embedding vector; x ∼ P_data(x) means x is sampled from the true-sentence space distribution, and z ∼ P_z(z) means z is sampled from the Gaussian distribution of the hidden vector space. The first term on the right, E_{x∼P_data(x)}[log D(x)], is the expectation of the log-likelihood of a true sentence being judged true by the discrimination network; the second term, E_{z∼P_z(z)}[log(1 - D(G(z)))], is the expectation of the log-likelihood of Gaussian noise sampled in the hidden vector space, mapped by the generation network to a pseudo sentence embedding vector, being judged a pseudo sentence by the discrimination network.
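The value function can be estimated by Monte-Carlo averaging over samples. The 1-D sigmoid discriminator and linear generator below are hypothetical stand-ins chosen only to make the two expectations computable:

```python
import numpy as np

def value_estimate(D, G, x_real, z):
    # Monte-Carlo estimate of V(D, G) =
    #   E_{x~P_data}[log D(x)] + E_{z~P_z}[log(1 - D(G(z)))]
    return np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))

rng = np.random.default_rng(1)
D = lambda x: 1.0 / (1.0 + np.exp(-x))  # hypothetical 1-D discriminator
G = lambda z: 0.5 * z                   # hypothetical 1-D generator
x_real = rng.normal(loc=2.0, size=1000) # samples standing in for true sentences
z = rng.normal(size=1000)               # Gaussian noise from the hidden space

v = value_estimate(D, G, x_real, z)
print(v)  # both terms are log-probabilities, so v is negative
```

During training, D's parameters would be adjusted to raise this value and G's parameters to lower it, which is the min-max alternation the formula expresses.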
Preferably, when data is input, the generation network samples a corresponding Gaussian noise z in the hidden vector space according to the input data x and maps it to a high-dimensional text representation feature G(z); the distance used for the judgment is then obtained through the discrimination network as follows:

L_R(z) = Σ |x - G(z)|
L_D(z) = Σ |f(x) - f(G(z))|
L(z) = (1 - λ) · L_R(z) + λ · L_D(z)

where L_R(z) is the residual between the real data and the generated data, L_D(z) is the difference between the two as extracted by the intermediate-layer features of the discriminator, f(x) denotes the last-layer logit features output by the discrimination network, and λ is the weighting coefficient: L(z) is the weighted average of L_R(z) and L_D(z), with the weights λ and (1 - λ) each ranging between 0 and 1.
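A minimal sketch of this distance follows, with a tanh stand-in for the discriminator features f(·) and a hypothetical λ = 0.1; the patent fixes neither of these choices.

```python
import numpy as np

def distance(x, g_z, f, lam=0.1):
    # L_R(z): residual between the input data and the generated data.
    l_r = np.sum(np.abs(x - g_z))
    # L_D(z): difference of the discriminator's feature extractions.
    l_d = np.sum(np.abs(f(x) - f(g_z)))
    # L(z): weighted average of the two, with weights (1 - lam) and lam.
    return (1.0 - lam) * l_r + lam * l_d

f = np.tanh  # stand-in for the last-layer logit features of the discriminator
x = np.array([1.0, 2.0, 3.0])            # input data
g_close = np.array([1.1, 2.0, 2.9])      # G(z) close to x: small distance
g_far = np.array([-5.0, 0.0, 8.0])       # G(z) far from x: large distance

score_close = distance(x, g_close, f)
score_far = distance(x, g_far, f)
print(score_close, score_far)  # score_close < score_far
```

An input whose best reachable G(z) still leaves a large L(z) is flagged as abnormal, which is how the threshold test in step 4 uses this quantity.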
Preferably, the hidden vector space is a Gaussian noise space of a set dimension.
Compared with the prior art, the invention has the beneficial effects that:
the invention can effectively ensure the safety of information playing by the cooperation of the generation network and the judgment network. In the invention, the generating network samples a corresponding Gaussian noise z in the hidden vector space according to the input number x, so that the external interference is reduced in the judgment process, and the judgment accuracy is improved. The formula for judging the distance obtained by the network calculation can calculate and judge the abnormal value of the input data, and can be used for enabling the network to learn the positive sample characteristics during training, so that the accuracy of system judgment is improved.
Drawings
FIG. 1 is a block diagram of the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following descriptions.
As shown in FIG. 1, the present invention provides a method for discriminating through a deep neural network model, comprising a generation network for generating sentence embedding vectors and a discrimination network, connected to the generation network, for judging the authenticity of an input sentence embedding vector. The method comprises the following processing steps:
Step 1: generate, through the generation network, true sentence embedding vectors from real use samples and pseudo sentence embedding vectors from non-use samples, then input them in turn into the discrimination network for judgment. During this phase the parameters of the generation network are held fixed and only the parameters of the discrimination network are adjusted, so that its discrimination accuracy keeps improving until it reaches a set value or converges.
Step 2: through an objective function that continuously lowers the discrimination accuracy of the discrimination network, bring the sentence embedding vectors produced by the generation network ever closer to the distribution of real samples. During this phase the parameters of the discrimination network are fixed and only the parameters of the generation network are adjusted through the objective function, until a set value is reached or training converges.
Step 3: perform steps 1 and 2 alternately so that the accuracies of the generation network and the discrimination network improve together through the adversarial process, until the discrimination network, given the pseudo and true sentence embedding vectors, can no longer tell them apart; the accuracy of the generation network is then considered to have reached the set value and to meet the requirements of actual use. At that point, stop alternating steps 1 and 2, then generate a set number of sentence embedding vectors through the generation network and average them to obtain the category center of the target type; the generation network may generate 150 sentence embedding vectors to improve accuracy.
Step 4: when data is input, calculate the distance between the sentence embedding vector generated from the data and the category center. If the distance exceeds a set threshold, the input is judged to be of a non-corresponding type; if it is below the threshold, of the corresponding type.
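Steps 3 and 4 above, averaging a set number of generated embeddings into a category center and then thresholding the distance of a new embedding, can be sketched as follows. The 8-dimensional toy vectors and the threshold value are assumptions for illustration.

```python
import numpy as np

def class_center(embeddings):
    # Average a set of generated sentence embedding vectors to get the
    # category center (the description suggests generating 150 of them).
    return np.mean(embeddings, axis=0)

def is_corresponding(vec, center, threshold):
    # Distance below the set threshold -> corresponding type.
    return np.linalg.norm(vec - center) < threshold

rng = np.random.default_rng(2)
generated = rng.normal(loc=1.0, scale=0.1, size=(150, 8))  # 150 generated vectors
center = class_center(generated)

near = np.full(8, 1.0)    # embedding assumed to match the target type
far = np.full(8, 10.0)    # embedding assumed not to match
ok_near = is_corresponding(near, center, threshold=1.0)
ok_far = is_corresponding(far, center, threshold=1.0)
print(ok_near, ok_far)
```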
In step 1, the discrimination network calculates the negative log-likelihood of the distribution of the true and pseudo sentence embedding vectors. Each input embedding vector (true or pseudo) is recorded as x = (x_1, x_2, …, x_n), where x_1, x_2, …, x_n are the elements of the vector and the subscript n is its total dimensionality. The discrimination network propagates the input x forward from the input layer through several hidden layers, each layer forming a weighted sum of its input with that layer's weight parameters:

z = Σ_{i=1}^{k} w_i · x_i + b

where x_i is the value of the i-th dimension (also called a feature) of the layer's input vector, w_i is the weight corresponding to feature x_i, b is the layer's bias, and k is the input dimension of the layer (k = n for the input layer). The sum is activated as a = δ(z) by the activation function δ, extracting features layer by layer. Finally the output layer produces a discrimination probability through a softmax function, recorded as y, and the negative log-likelihood -log(y) is taken as the objective, recorded as J(w).
The weight and bias parameters of the discrimination network are then set by the gradient back-propagation method and adjusted in the direction that reduces the negative log-likelihood, so that the discrimination accuracy of the discrimination network reaches the set value. The update formula is w_new = w_old - η · ∇J(w), where w_old is the weight parameter before the update, η is the learning rate, and ∇J(w) is the gradient; the minus sign indicates that the updated weight parameter w_new is set in the direction of gradient descent. The parameters are iteratively updated by this method.
In this embodiment, in step 2, according to the principle of the adversarial network, the parameters of the generation network are updated in the direction that reduces the accuracy of the discrimination network, so that the pseudo sentence embedding vectors obtained from the generation network's mapping gradually approach the distribution of real samples, increasing the accuracy of the sentences the generation network produces.
In step 2, the objective function is calculated as

min_G max_D V(D, G) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z(z)}[log(1 - D(G(z)))]

where V(D, G) is the value function of alternately training the generation network and the discrimination network, D denotes the discrimination network, and G the generation network; D(x) is the discrimination network's output probability for input data x, and G(z) is the mapping of Gaussian noise z, sampled in the hidden vector space of the generation network, to a pseudo sentence embedding vector; x ∼ P_data(x) means x is sampled from the true-sentence space distribution, and z ∼ P_z(z) means z is sampled from the Gaussian distribution of the hidden vector space. The first term on the right, E_{x∼P_data(x)}[log D(x)], is the expectation of the log-likelihood of a true sentence being judged true by the discrimination network; the second term, E_{z∼P_z(z)}[log(1 - D(G(z)))], is the expectation of the log-likelihood of Gaussian noise sampled in the hidden vector space, mapped by the generation network to a pseudo sentence embedding vector, being judged a pseudo sentence by the discrimination network. The true-sentence space consists of a large number of true sample sentences.
In this embodiment, the generation network G and the discrimination network D are initially unknown networks whose parameters are precisely the optimization parameters we need to find in order to reach the required accuracy. D(x) is the output of the discrimination network D, known once the input x is given; G(z) is the pseudo vector produced by the generation network from a z sampled in the hidden space, and once z is determined, G(z) is uniquely determined. x ∼ P_data(x) is a random variable sampled from the true-sentence space, x obeying some probability distribution; z ∼ P_z(z) is a random variable sampled from the hidden space, z obeying a distribution that is generally known, e.g. Gaussian. In operation, given a generation network G and a discrimination network D, a batch of true sentences is taken as samples and recorded as X = {X_1, X_2, …, X_N}, with N samples in the sample set X.
First, the parameters of the generation network G and the discrimination network D are randomly initialized, for example with a Gaussian random function of mean 0 and variance 1. At this point G and D have no useful generating or discriminating capability, because they are random.
Second, the parameters of the generation network G are fixed, and a low-dimensional vector z = (z_1, z_2, …, z_100) is sampled in the hidden space (this hidden space may have any distribution; here it is assumed Gaussian). Inputting z into G produces a pseudo sentence embedding vector X_fake, while a true sentence embedding vector X_real is produced from the samples. X_fake and X_real, together with their labels "real" and "fake" (known in advance), are input into the discrimination network D; the loss Δ of D is calculated from the labels, and back-propagating Δ through the gradient updates the parameters of D, improving its accuracy. When the accuracy of D has risen to the set value, so that real and fake can be judged accurately, training of D is paused and training of G begins.
Again, X_fake and X_real are obtained by the same procedure, but this time the parameters of D are fixed and the parameters of the generation network G are adjusted, so that the previously obtained discrimination ability of D becomes weaker and less accurate. For example, once the judgment accuracy of D has weakened to 50%, about the same as random guessing, training of G is paused and training of D resumes.
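The alternation just described can be condensed into a toy 1-D loop in which only D's parameters move in the first phase and only G's parameter moves in the second. The scalar networks, learning rate, and hand-derived gradients below are illustrative assumptions, not the patent's models.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy 1-D stand-ins: real samples cluster around 3.0; the generator
# G(z) = theta + z maps hidden-space noise; the discriminator
# D(x) = sigmoid(a*x + b) outputs a probability of "real".
theta, a, b, lr = 0.0, 1.0, 0.0, 0.05

for _ in range(200):
    x_real = rng.normal(3.0, 0.5, 64)
    z = rng.normal(0.0, 0.5, 64)
    x_fake = theta + z

    # Phase 1: G fixed; only D's parameters (a, b) move, raising D's accuracy.
    s_r, s_f = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    a -= lr * (-np.mean((1 - s_r) * x_real) + np.mean(s_f * x_fake))
    b -= lr * (-np.mean(1 - s_r) + np.mean(s_f))

    # Phase 2: D fixed; only G's parameter theta moves, lowering D's
    # accuracy on fakes (non-saturating generator loss -log D(G(z))).
    s_f = sigmoid(a * (theta + z) + b)
    theta -= lr * (-np.mean(1 - s_f) * a)

print(theta)  # theta drifts upward from 0 toward the real data
```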
This process is repeated in a cycle, which is the meaning of the function min_G max_D V(D, G): the discrimination network D maximizes the discrimination accuracy, while the generation network G minimizes it. Through this pair of adversarial alternating training processes, the capabilities of both networks improve: the distribution of pseudo sentence embedding vectors produced by G comes ever closer to the distribution of true sentence embedding vectors, while the accuracy with which D judges authenticity rises in step.
Finally, when the generating capacity of G has risen to the set value, the pseudo sentence embedding vectors it produces can almost pass for real and the discrimination network D can no longer tell true from false; G and D are then put into use.
When data is input, the generation network samples a corresponding Gaussian noise z in the hidden vector space according to the input data x and maps it to a high-dimensional text representation feature G(z); the distance used for the judgment is then obtained through the discrimination network as follows:

L_R(z) = Σ |x - G(z)|
L_D(z) = Σ |f(x) - f(G(z))|
L(z) = (1 - λ) · L_R(z) + λ · L_D(z)

where L_R(z) is the residual between the real data and the generated data, L_D(z) is the difference between the two as extracted by the intermediate-layer features of the discriminator, f(x) denotes the last-layer logit features output by the discrimination network, and λ is the weighting coefficient: L(z) is the weighted average of L_R(z) and L_D(z), with the weights λ and (1 - λ) each ranging between 0 and 1. The calculation has two parts. The first is the residual loss L_R(z): by minimizing it, the corresponding Gaussian noise z is searched for in the hidden vector space, so that the pseudo sentence embedding vector it generates through the generation network, i.e. the high-dimensional text representation feature G(z), is as close as possible to the input data x. The second is the discrimination loss L_D(z): by minimizing the loss on the intermediate-layer features, the distribution of G(z) is placed as far as possible on the learned true-sentence space. This calculation is performed after the generation network G and the discrimination network D have been trained and put into use, when new data are input, in order to judge whether the new input is abnormal. The judgment threshold may be set to 0.1. The hidden vector space is a Gaussian noise space of a set dimension, which may be 100 dimensions.
In this embodiment, f(x) denotes the last-layer logit features output by the discrimination network. The difference is calculated on the logits rather than on the final output, because the logit features work better than directly using the output.
In this embodiment, for each piece of input data x it must be judged whether it is abnormal, that is, whether it is consistent with the distribution of true sentences used in the training stage. We therefore sample a Gaussian noise z from the hidden vector space and map it through the generation network to the high-dimensional text representation feature G(z), expecting the generated representation features to be close to those of the true sentence embedding vector. With the generation network G and the discrimination network D fixed, the Gaussian noise z of the hidden vector space is updated by gradient back-propagation so as to reduce the difference between the two sentence embedding vectors as far as possible. In this way a search over the hidden vector space is realized, so that the Gaussian noise z best represents the input data x, implementing hidden-vector-space sampling. In use, training is carried out on real sentence data to obtain a generation network G and a discrimination network D that meet the required accuracy, and the keyword library is dynamically updated during use, so that the accuracy of information playing can be ensured and continuously improved through continued use.
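This hidden-space search, holding G fixed and updating z by gradient descent on the residual, can be sketched with a hypothetical linear generator, for which the gradient of ||x - G(z)||^2 has the closed form -2 · W.T @ (x - G(z)). The generator, dimensions, and step size below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical fixed linear generator G(z) = W @ z from 2-D hidden noise to
# 4-D "text representation features" (the description mentions a 100-D hidden
# space; 2-D keeps the toy small).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
G = lambda z: W @ z

x = G(np.array([0.7, -0.3]))  # input data made from a known hidden vector

# Search the hidden vector space: with G (and D) fixed, update z by gradient
# descent on the squared residual ||x - G(z)||^2.
z = np.zeros(2)
for _ in range(500):
    z -= 0.05 * (-2.0 * W.T @ (x - G(z)))
residual = np.sum(np.abs(x - G(z)))
print(z, residual)  # z recovers [0.7, -0.3]; the residual is near 0
```

For an input that no hidden vector can reproduce well, the residual left after this search stays large, which is what the abnormality threshold detects.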
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein, and may be used in various other combinations, modifications, and environments, and may be changed within the scope of the inventive concept described above through the above teachings or the skill or knowledge of the relevant art. Modifications and variations effected by those skilled in the art that do not depart from the spirit and scope of the invention shall fall within the protection scope of the appended claims.

Claims (6)

1. A method for discriminating through a deep neural network model, characterized by comprising a generation network for generating sentence embedding vectors and a discrimination network, connected to the generation network, for judging the authenticity of an input sentence embedding vector, the method comprising the following processing steps:
Step 1: generating, through the generation network, true sentence embedding vectors from real use samples and pseudo sentence embedding vectors from non-use samples, then inputting them in turn into the discrimination network for judgment, wherein during this phase the parameters of the generation network are held fixed and only the parameters of the discrimination network are adjusted, so that its discrimination accuracy keeps improving until it reaches a set value or converges;
Step 2: through an objective function that continuously lowers the discrimination accuracy of the discrimination network, bringing the sentence embedding vectors produced by the generation network ever closer to the distribution of real samples, wherein during this phase the parameters of the discrimination network are fixed and only the parameters of the generation network are adjusted through the objective function, until a set value is reached or training converges;
Step 3: performing steps 1 and 2 alternately so that the accuracies of the generation network and the discrimination network improve together through the adversarial process, until the discrimination network, given the pseudo and true sentence embedding vectors, can no longer tell them apart, whereupon the accuracy of the generation network is considered to have reached the set value and to meet the requirements of actual use; at that point, stopping the alternation of steps 1 and 2, then generating a set number of sentence embedding vectors through the generation network and averaging them to obtain the category center of the target type;
Step 4: when data is input, calculating the distance between the sentence embedding vector generated from the data and the category center, and judging the input to be of a non-corresponding type if the distance exceeds a set threshold, or of the corresponding type if it is below the threshold.
2. The method for discriminating by a deep neural network model according to claim 1, characterized in that, in step 1, the discrimination network calculates the negative log-likelihood of the distribution of the true and pseudo sentence embedding vectors. Each input embedding vector (true or pseudo) is recorded as x = (x_1, x_2, …, x_n), where x_1, x_2, …, x_n are the elements of the vector and the subscript n is its total dimensionality. The discrimination network propagates the input x forward from the input layer through several hidden layers, each layer forming a weighted sum of its input with that layer's weight parameters:

z = Σ_{i=1}^{k} w_i · x_i + b

where x_i is the value of the i-th dimension (also called a feature) of the layer's input vector, w_i is the weight corresponding to feature x_i, b is the layer's bias, and k is the input dimension of the layer (k = n for the input layer); the sum is activated as a = δ(z) by the activation function δ, extracting features layer by layer; finally the output layer produces a discrimination probability through a softmax function, recorded as y, and the negative log-likelihood -log(y) is taken as the objective, recorded as J(w);
then, a weight parameter and a bias parameter of the discrimination network are set by using a gradient back propagation method, and the weight parameter and the bias parameter are adjusted towards the direction of reducing the negative log likelihood, so that the discrimination accuracy of the discrimination network reaches a set value.
3. The method of claim 2, characterized in that the update formula used by the gradient back-propagation method is w_new = w_old - η · ∇J(w), where w_old is the weight parameter before the update, η is the learning rate, and ∇J(w) is the gradient; the minus sign indicates that the updated weight parameter w_new is set in the direction of gradient descent, and the parameters are iteratively updated by this method.
4. The method of claim 1, wherein in step 2 the objective function is calculated as follows:

min_G max_D V(D, G) = E_{x∼P_data(x)}[log D(x)] + E_{z∼P_z(z)}[log(1 − D(G(z)))]

where V(D, G) is the value function of the alternately trained generation network and discrimination network, D denotes the discrimination network and G the generation network; D(x) denotes the discrimination output probability of the discrimination network for input data x, and G(z) denotes the mapping from Gaussian noise z, sampled in the hidden vector space of the generation network, to a pseudo sentence embedding vector; x ∼ P_data(x) indicates that x is sampled from the spatial distribution of true sentences, and z ∼ P_z(z) indicates that z is sampled from the Gaussian distribution over the hidden vector space. The first term on the right, E_{x∼P_data(x)}[log D(x)], represents the expectation of the log-likelihood that the discrimination network judges a true sentence to be a true sentence; the second term on the right, E_{z∼P_z(z)}[log(1 − D(G(z)))], represents the expectation of the log-likelihood that Gaussian noise sampled from the hidden vector space, mapped by the generation network into a pseudo sentence embedding vector, is judged by the discrimination network to be a pseudo sentence.
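The two expectations in this value function can be estimated by Monte-Carlo averaging over samples. In the sketch below, the one-dimensional "discriminator" D and "generator" G are toy stand-ins chosen only to make the expectations computable; neither represents the trained networks of the claim.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    # Toy discriminator: sigmoid score in (0, 1), higher for larger x.
    return 1.0 / (1.0 + np.exp(-x))

def G(z):
    # Toy generator: affine map from Gaussian noise to "samples".
    return 0.5 * z - 1.0

x = rng.normal(loc=2.0, scale=1.0, size=10_000)   # x ~ P_data (assumed toy distribution)
z = rng.normal(size=10_000)                       # z ~ P_z, Gaussian noise

# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], estimated by sample means
value = np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(G(z))))
```

During alternate training, D is updated to increase this value and G to decrease it, which is what the min-max notation expresses.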
5. The method of claim 1, wherein when data is input, the generation network samples, according to the input data x, a corresponding Gaussian noise z in the hidden vector space and maps it to a high-dimensional text representation feature G(z), and the determination is then made by computing a distance through the discrimination network, with the following formulas:

A(x) = (1 − λ)·L_R(x) + λ·L_D(x)
L_R(x) = ‖x − G(z)‖
L_D(x) = ‖f(x) − f(G(z))‖

where L_R(x) represents the residual between the real data and the generated data, L_D(x) represents the difference between the two as extracted through the features of the discriminator's intermediate layers, f(x) denotes the last-layer logit features output by the discrimination network, and λ is the weighting coefficient; A(x) is then the weighted average of L_R(x) and L_D(x), and the weights λ and 1 − λ take values in the range 0 to 1.
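The distance computation of this claim (an AnoGAN-style anomaly score, per the cited prior art) can be sketched as below. The feature map f, the generated vector G(z), the weighting λ, and the threshold are placeholder assumptions for illustration, not the trained networks of the claim.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(v):
    # Placeholder for the discriminator's last-layer logit features.
    return np.tanh(v)

x = rng.normal(size=16)                   # input sentence embedding vector
g_z = x + 0.05 * rng.normal(size=16)      # G(z): generated approximation of x

lam = 0.1                                 # weighting coefficient, 0 <= lam <= 1
L_R = np.linalg.norm(x - g_z)             # residual between real and generated data
L_D = np.linalg.norm(f(x) - f(g_z))       # distance between discriminator features
A = (1.0 - lam) * L_R + lam * L_D         # weighted-average anomaly distance

threshold = 1.0                           # set threshold (assumed value)
is_corresponding_type = A < threshold     # below threshold: corresponding type
```

A small A means the generation network can reproduce the input closely, so the input is judged to be of the corresponding type; a large A means it is judged non-corresponding, as in step 4 of the method.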
6. The method according to claim 4 or 5, wherein the hidden vector space is a Gaussian noise space with a set dimension.
CN202211317661.8A 2022-10-26 2022-10-26 Method for distinguishing through deep neural network model Pending CN115660037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211317661.8A CN115660037A (en) 2022-10-26 2022-10-26 Method for distinguishing through deep neural network model

Publications (1)

Publication Number Publication Date
CN115660037A true CN115660037A (en) 2023-01-31

Family

ID=84990574

Country Status (1)

Country Link
CN (1) CN115660037A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897740A (en) * 2018-05-07 2018-11-27 Inner Mongolia University of Technology Mongolian-Chinese machine translation method based on an adversarial neural network
CN110792563A (en) * 2019-11-04 2020-02-14 Beijing Tianze Zhiyun Technology Co., Ltd. Wind turbine generator blade fault audio monitoring method based on a convolutional generative adversarial network
CN112133291A (en) * 2019-06-05 2020-12-25 iFLYTEK Co., Ltd. Language identification model training, language identification method and related device
CN112507605A (en) * 2020-11-04 2021-03-16 Tsinghua University Power distribution network anomaly detection method based on AnoGAN
CN112528027A (en) * 2020-12-24 2021-03-19 Beijing Baidu Netcom Science and Technology Co., Ltd. Text classification method, device, equipment, storage medium and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Md Abul Bashar et al., "TAnoGAN: Time Series Anomaly Detection with Generative Adversarial Networks", arXiv:2008.09567, pages 1-9 *

Similar Documents

Publication Publication Date Title
CN108197525B (en) Face image generation method and device
CN111753881B (en) Concept sensitivity-based quantitative recognition defending method against attacks
CN113488073B (en) Fake voice detection method and device based on multi-feature fusion
CN114842267A (en) Image classification method and system based on label noise domain self-adaption
CN113949549B (en) Real-time traffic anomaly detection method for intrusion and attack defense
CN107832789B (en) Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation
KR102387886B1 (en) Method and apparatus for refining clean labeled data for artificial intelligence training
CN115131347B (en) Intelligent control method for processing zinc alloy parts
CN115511012B (en) Class soft label identification training method with maximum entropy constraint
CN111144462B (en) Unknown individual identification method and device for radar signals
CN110223342B (en) Space target size estimation method based on deep neural network
CN115617882A (en) Time sequence diagram data generation method and system with structural constraint based on GAN
CN110084301B (en) Hidden Markov model-based multi-working-condition process working condition identification method
CN114331731A (en) PCA and RF based block chain abnormity detection method and related device
CN112001480A (en) Small sample amplification method for sliding orientation data based on generation of countermeasure network
CN113343123B (en) Training method and detection method for generating confrontation multiple relation graph network
CN114419379A (en) System and method for improving fairness of deep learning model based on antagonistic disturbance
CN115660037A (en) Method for distinguishing through deep neural network model
CN110827809A (en) Language identification and classification method based on condition generation type confrontation network
CN110706712A (en) Recording playback detection method in home environment
Henmi et al. Interactive evolutionary computation with evaluation characteristics of Multi-IEC users
CN113420870B (en) U-Net structure generation countermeasure network and method for underwater sound target recognition
CN115687568A (en) Method for carrying out safety protection on variable information board content
CN114139937A (en) Indoor thermal comfort data generation method, system, equipment and medium
CN113392901A (en) Confrontation sample detection method based on deep learning model neural pathway activation characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination