CN112132181B - Image authenticity identification method based on a generative adversarial network

Info

Publication number
CN112132181B
Authority
CN
China
Prior art keywords
image
network
layer
model
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010843673.9A
Other languages
Chinese (zh)
Other versions
CN112132181A (en)
Inventor
Ma Hongbin
Ma Yanlong
Wang Yingli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang University
Original Assignee
Heilongjiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heilongjiang University filed Critical Heilongjiang University
Priority to CN202010843673.9A priority Critical patent/CN112132181B/en
Publication of CN112132181A publication Critical patent/CN112132181A/en
Application granted granted Critical
Publication of CN112132181B publication Critical patent/CN112132181B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/018Certifying business or products
    • G06Q30/0185Product, service or business identity fraud

Abstract

An image authenticity identification method based on a generative adversarial network. Deep-learning image classification networks have limited recognition accuracy on low-resolution, unclear images, and merely increasing the number of network layers brings only limited improvement for blurred images. The method comprises the following specific steps: the image is processed and identified by a generative adversarial network comprising a generation model and a classification model; generated data are obtained through the generation model, and authenticity is judged through the classification model. The invention is used for identifying the authenticity of images.

Description

Image authenticity identification method based on a generative adversarial network
Technical Field
The invention relates to an image authenticity identification method based on a generative adversarial network.
Background
With the continuous development of artificial intelligence, image recognition methods based on deep learning are widely used, and many deep-learning image classification networks have emerged, such as AlexNet, VGG, GoogLeNet and ResNet, with 8, 19, 22 and 152 layers respectively. On clear image data sets their recognition error rates are 16.4%, 7.33%, 6.66% and 4.92%, so classification accuracy improves as the number of network layers increases. On image data sets of low resolution and poor definition, however, the recognition error rate of AlexNet is 44.01% and that of GoogLeNet reaches 44.61%. Deep-learning image classification networks therefore have limited recognition accuracy on low-resolution, unclear images, and improving the recognition accuracy on blurred images merely by adding network layers is of limited benefit.
Disclosure of Invention
To address the above shortcomings of the prior art, an object of the present invention is to provide an image authenticity identification method based on a generative adversarial network.
To achieve the above object, the present invention provides an image authenticity identification method based on a generative adversarial network, in which an image is processed and identified by the generative adversarial network; the generative adversarial network comprises a generation model and a classification model, generated data are obtained through the generation model, and authenticity is identified through the classification model;
the mathematical description is as follows:
\min_G \max_D V(D,G) = \mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))]
wherein: V(D, G) is the loss function of the GAN (generative adversarial network);
P_r and P_z are the distributions of the real data and the random noise, respectively;
x is a sample drawn from the real data;
E is the mathematical expectation;
D(x) denotes the output of the data after it passes through the discrimination model;
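For illustration only, a minimal PyTorch estimate of this value function on one mini-batch is sketched below; PyTorch, the function name, and the assumption that the discrimination model outputs probabilities in (0, 1) are choices made here and are not specified by the patent.

```python
import torch

def gan_value_function(discriminator, generator, real_batch, noise_batch):
    """Mini-batch estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]."""
    d_real = discriminator(real_batch)               # D(x), assumed to lie in (0, 1)
    d_fake = discriminator(generator(noise_batch))   # D(G(z)), assumed to lie in (0, 1)
    eps = 1e-8                                       # guard against log(0)
    return torch.log(d_real + eps).mean() + torch.log(1.0 - d_fake + eps).mean()
```

During training the discrimination model is updated to maximize this quantity while the generation model is updated to minimize it.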
the method comprises the following steps:
(1) Building a generating model;
The generation model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with one residual block formed per skip connection;
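A minimal sketch of one such residual block with a skip connection is given below, assuming PyTorch; the channel width, kernel size, normalization and activation are illustrative choices that the patent does not fix.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block: a small convolution stack whose output is added to the skipped input."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)   # skip connection: the input jumps over the convolution stack
```

A generation model following the description would stack four such blocks before the sub-pixel up-sampling layer.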
(2) Constructing a classification model;
The classifier is a nine-layer convolutional network comprising C fully connected layers and D convolutional layers. Image samples of size m × n × p undergo feature extraction through the seven convolutional layers; the feature information extracted by the convolutional layers is integrated by the two fully connected layers, and a (k+1)-dimensional classification result is finally output, in which the first k dimensions give the confidence of the corresponding real classes and the (k+1)-th dimension gives the confidence that the image is fake;
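A minimal sketch of such a (k+1)-way classifier follows, assuming PyTorch, 3-channel 64 × 64 inputs, and illustrative channel widths and strides; none of these hyperparameters are given in the patent.

```python
import torch.nn as nn

def build_classifier(k: int, in_channels: int = 3) -> nn.Sequential:
    """Seven convolutional layers followed by two fully connected layers, output dimension k + 1."""
    chans = [in_channels, 32, 64, 64, 128, 128, 256, 256]   # illustrative channel widths
    layers = []
    for i in range(7):                                       # seven convolution layers
        stride = 2 if i % 2 == 1 else 1                      # halve resolution on alternate layers
        layers += [nn.Conv2d(chans[i], chans[i + 1], 3, stride=stride, padding=1),
                   nn.LeakyReLU(0.2)]
    layers += [nn.Flatten(),
               nn.LazyLinear(512), nn.LeakyReLU(0.2),        # first fully connected layer
               nn.Linear(512, k + 1)]                        # second: k real classes + 1 "fake" class
    return nn.Sequential(*layers)
```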
(3) Building a training model;
The training model is a VGG16 network, i.e. a 16-layer VGG network trained on clear images whose resolution has been adjusted to 64 × 64 pixels;
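Assuming torchvision's VGG16 is used as a stand-in for this 16-layer VGG (the patent names no framework), the trained network can be built and its weights frozen as sketched below; loading the checkpoint from the prior training stage is omitted because the path is implementation-specific.

```python
from torchvision import models

def load_frozen_vgg16(num_classes: int):
    """Build a VGG16 classifier with `num_classes` outputs and freeze its weights."""
    vgg = models.vgg16(weights=None, num_classes=num_classes)
    # state_dict from the prior 64x64 training stage would be loaded here
    for p in vgg.parameters():
        p.requires_grad = False   # weights of the VGG16 are kept unchanged during GAN training
    return vgg.eval()
```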
(4) Outputting and identifying images;
An image of 32 × 32 pixels is input into the generation model; the generated output and the corresponding clear image of 64 × 64 pixels are input into the VGG16 network. The weights of the VGG16 are kept unchanged during training; after Y iterations the VGG16 network is replaced by the discrimination model, and the authenticity of the image is identified by the discrimination model.
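A highly simplified training sketch of this step follows (PyTorch assumed). How the two network responses are combined into a loss is an assumption here, a simple response-matching MSE; the patent only states that the generated image and the clear image are both input into the network and that the VGG16 is swapped for the discrimination model after Y iterations.

```python
import torch.nn.functional as F

def train_generator(generator, frozen_vgg16, discriminator, loader, opt_g, Y, total_iters):
    """Step (4), simplified: the generated 64x64 image and the matching clear 64x64 image are
    both passed through the frozen VGG16 and compared; after Y iterations the discrimination
    model takes the VGG16's place."""
    it = 0
    while it < total_iters:
        for lr_img, clear_img in loader:                  # 32x32 input and its clear 64x64 version
            critic = frozen_vgg16 if it < Y else discriminator
            fake = generator(lr_img)                      # generated 64x64 image
            loss = F.mse_loss(critic(fake), critic(clear_img).detach())  # match critic responses
            opt_g.zero_grad()
            loss.backward()
            opt_g.step()
            it += 1
            if it >= total_iters:
                break
    return generator
```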
In the image authenticity identification method based on the generative adversarial network, the weights of the VGG16 are kept unchanged during training, and the number of iterations Y after which the VGG16 network is replaced by the discrimination model takes the values 2000, 20000, 50000 and 80000.
The generation model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with one residual block formed per skip connection. The sub-pixel up-sampling layer takes the output of the previous convolutional layer as input I and produces O, as shown in the formula:
O = f^L(I) = PS(W_L * f^{L-1}(I) + b_L)
wherein: O is the output result, I is the input image, f is the activation function and f^L(I) is the activation output of the L-th layer for the data I; W_L is the weight of the L-th layer; f^{L-1}(I) is the activation output of the (L-1)-th layer for the data I; b_L is the bias (residual) term of the L-th layer;
PS is the periodic shuffling operation, which rearranges the output tensors of the r² convolutions into a new tensor, Height and Width being the height and width set for the image I;
wherein r² determines the magnification by which the output image resolution of 32 × 32 pixels is increased to 64 × 64 pixels.
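The PS operation corresponds to what PyTorch exposes as nn.PixelShuffle; the small example below, with an assumed up-scaling factor r = 2, reproduces the 32 × 32 → 64 × 64 enlargement described above.

```python
import torch
import torch.nn as nn

r = 2                                                       # up-scaling factor, r^2 = 4
conv = nn.Conv2d(64, 3 * r * r, kernel_size=3, padding=1)   # the previous convolution layer
ps = nn.PixelShuffle(r)                                     # (N, C*r^2, H, W) -> (N, C, r*H, r*W)

x = torch.randn(1, 64, 32, 32)                              # feature map at 32 x 32 resolution
out = ps(conv(x))
print(out.shape)                                            # torch.Size([1, 3, 64, 64])
```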
In the image authenticity identification method based on the generative adversarial network, the classifier is a nine-layer convolutional network comprising C fully connected layers and D convolutional layers, wherein C is two and D is seven.
The invention has the beneficial effects that:
1. The invention improves the recognition accuracy of unclear images by using the generation model to repair and enlarge low-resolution, unclear images. The generation model adopts a skip-connection structure in which each skip forms a residual block; this effectively reduces the number of parameters in the network, facilitates the back propagation of gradients, and accelerates the convergence of the network.
2. The invention achieves a high recognition rate and effectively solves the problem of poor recognition when the number of valid signature samples is small.
Drawings
Fig. 1 is a schematic diagram of the structure of the GAN of the present invention;
FIG. 2 is a schematic diagram of the structure of the generative model of the present invention;
fig. 3 is a schematic structural diagram of a residual block;
FIG. 4 is a schematic diagram of a classifier;
fig. 5 is a schematic diagram of the structure of the resulting sample.
Detailed Description
For a further understanding of the structure, features, and other objects of the invention, reference is now made to the accompanying drawings and the preferred embodiments, which are provided to illustrate the concepts of the invention and not to limit it.
In a first embodiment, the image authenticity identification method based on the generative adversarial network processes and identifies the image by using the generative adversarial network; the generative adversarial network comprises a generation model and a classification model, the generation model produces generated data, and the classification model judges authenticity;
the mathematical description is as follows:
\min_G \max_D V(D,G) = \mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))]
wherein: V(D, G) is the loss function of the GAN (generative adversarial network);
P_r and P_z are the distributions of the real data and the random noise, respectively;
x is a sample drawn from the real data;
E is the mathematical expectation;
D(x) denotes the output of the data after it passes through the discrimination model;
the method comprises the following steps:
(1) Building a generating model;
The generation model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with one residual block formed per skip connection. Up-sampling can be used to enlarge an image and is done by interpolation, i.e., new elements are inserted between the pixels of the original image using a suitable interpolation algorithm. Pixel interpolation inevitably introduces errors, so residual blocks are set to adjust the result: a residual block computes the residual between the predicted value and the observed value, and a relatively accurate result is obtained by weighting this residual together with the direct fitting result. Several rounds of residual processing form a residual network. A neural network with too many layers is difficult to train, and the gradients obtained by the chain rule are prone to vanishing or exploding. To avoid vanishing gradients, the residual network adds the original data to each stack of convolutional layers and also adds skip connections, which take the activation from one layer and feed it quickly to a deeper layer of the network, so that the deep network retains the properties of the shallow network and does not readily degrade.
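For context, the interpolation-based up-sampling described above can be written in one line; the bicubic mode below is an illustrative choice rather than one mandated by the patent.

```python
import torch
import torch.nn.functional as F

low_res = torch.randn(1, 3, 32, 32)        # stand-in for a 32 x 32 input image
up = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
print(up.shape)                            # torch.Size([1, 3, 64, 64])
# The residual blocks then learn a correction to such an interpolated estimate.
```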
(2) Constructing a classification model;
The classifier is a nine-layer convolutional network comprising C fully connected layers and D convolutional layers. Image samples of size m × n × p undergo feature extraction through the seven convolutional layers; the feature information extracted by the convolutional layers is integrated by the two fully connected layers, and a (k+1)-dimensional classification result is finally output, in which the first k dimensions give the confidence of the corresponding real classes and the (k+1)-th dimension gives the confidence that the image is fake;
wherein: m x n represents image resolution;
p represents the number of image channels;
(3) Building a training model;
The training model is a VGG16 network, i.e. a 16-layer VGG network trained on clear images whose resolution has been adjusted to 64 × 64 pixels;
(4) Outputting and identifying images;
An image of 32 × 32 pixels is input into the generation model; the generated output and the corresponding clear image of 64 × 64 pixels are input into the VGG16 network. The weights of the VGG16 are kept unchanged during training; after Y iterations the VGG16 network is replaced by the discrimination model, and the authenticity of the image is identified by the discrimination model.
In the second embodiment, which further describes the image authenticity identification method based on the generative adversarial network of the first embodiment, the weights of the VGG16 are kept unchanged during training while the weights of the generation model are updated, and the number of iterations Y after which the VGG16 network is replaced by the discrimination model takes the values 2000, 20000, 50000 and 80000.
The third embodiment further describes the image authenticity identification method based on the generative adversarial network of the first embodiment: the generation model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with one residual block formed per skip connection; the sub-pixel up-sampling layer takes the output of the previous convolutional layer as input I and produces O, as shown in the formula:
O = f^L(I) = PS(W_L * f^{L-1}(I) + b_L)
wherein: O is the output result, I is the input image, f is the activation function and f^L(I) is the activation output of the L-th layer for the data I; W_L is the weight of the L-th layer; f^{L-1}(I) is the activation output of the (L-1)-th layer for the data I; b_L is the bias (residual) term of the L-th layer;
PS is the periodic shuffling operation, which rearranges the output tensors of the r² convolutions into a new tensor, Height and Width being the height and width set for the image I;
wherein r² determines the magnification by which the output image resolution of 32 × 32 pixels is increased to 64 × 64 pixels.
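The rearrangement performed by PS can also be written out explicitly; the sketch below, using plain PyTorch reshapes with the shapes assumed above (r = 2), is equivalent to the library pixel-shuffle operation.

```python
import torch

def periodic_shuffle(x: torch.Tensor, r: int) -> torch.Tensor:
    """Rearrange an (N, C*r^2, H, W) tensor into (N, C, r*H, r*W), as PS does."""
    n, cr2, h, w = x.shape
    c = cr2 // (r * r)
    x = x.view(n, c, r, r, h, w)        # split the channel axis into C, r, r
    x = x.permute(0, 1, 4, 2, 5, 3)     # interleave: N, C, H, r, W, r
    return x.reshape(n, c, h * r, w * r)

x = torch.randn(1, 12, 32, 32)          # 12 = 3 * 2^2 channels, i.e. r = 2
assert torch.allclose(periodic_shuffle(x, 2), torch.nn.functional.pixel_shuffle(x, 2))
```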
The fourth embodiment further describes the image authenticity identification method based on the generative adversarial network of the first embodiment: the classifier is a nine-layer convolutional network comprising C fully connected layers and D convolutional layers, wherein C is two and D is seven.
It should be noted that the foregoing summary and detailed description are intended to demonstrate practical applications of the technical solution provided by the present invention and should not be construed as limiting its scope. Various modifications, equivalent substitutions, and improvements made within the spirit and principles of the invention will occur to those skilled in the art. The scope of the invention is defined by the appended claims.

Claims (4)

1. An image authenticity identification method based on a generative adversarial network, characterized in that the image is processed and identified by the generative adversarial network, the generative adversarial network comprises a generation model and a classification model, generated data are obtained through the generation model, and authenticity is judged through the classification model;
the mathematical description is as follows:
\min_G \max_D V(D,G) = \mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{z \sim P_z}[\log(1 - D(G(z)))]
wherein: V(D, G) is the loss function of the GAN (generative adversarial network);
P_r and P_z are the distributions of the real data and the random noise, respectively;
x is a sample drawn from the real data;
E is the mathematical expectation;
D(x) denotes the output of the data after it passes through the discrimination model;
the method comprises the following steps:
(1) Building a generating model;
the generation model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with one residual block formed per skip connection; up-sampling can be used to enlarge an image and is done by interpolation, i.e., new elements are inserted between the pixels of the original image using a suitable interpolation algorithm; pixel interpolation inevitably introduces errors, so residual blocks are set to adjust the result: a residual block computes the residual between the predicted value and the observed value, and a relatively accurate result is obtained by weighting this residual together with the direct fitting result; several rounds of residual processing form a residual network; a neural network with too many layers is difficult to train, and the gradients obtained by the chain rule are prone to vanishing or exploding; to avoid vanishing gradients, the residual network adds the original data to each stack of convolutional layers and also adds skip connections, which take the activation from one layer and feed it quickly to a deeper layer of the network, so that the deep network retains the properties of the shallow network and does not readily degrade;
(2) Constructing a classification model;
the classifier is a nine-layer convolutional network comprising C fully connected layers and D convolutional layers; image samples of size m × n × p undergo feature extraction through the seven convolutional layers, the feature information extracted by the convolutional layers is integrated by the two fully connected layers, and a (k+1)-dimensional classification result is finally output, in which the first k dimensions give the confidence of the corresponding real classes and the (k+1)-th dimension gives the confidence that the image is fake;
wherein: m x n represents image resolution;
p represents the number of image channels;
(3) Building a training model;
the training model is a VGG16 network, i.e. a 16-layer VGG network trained on clear images whose resolution has been adjusted to 64 × 64 pixels;
(4) Outputting and identifying images;
an image of 32 × 32 pixels is input into the generation model; the generated output and the corresponding clear image of 64 × 64 pixels are input into the VGG16 network; the weights of the VGG16 are kept unchanged during training; after Y iterations the VGG16 network is replaced by the discrimination model, and the authenticity of the image is identified by the discrimination model.
2. The image authenticity identification method based on a generative adversarial network according to claim 1, wherein during training the weights of the VGG16 are kept unchanged while the weights of the generation model are updated, and the number of iterations Y after which the VGG16 network is replaced by the discrimination model takes the values 2000, 20000, 50000 and 80000.
3. The image authenticity identification method based on a generative adversarial network according to claim 2, wherein the generation model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with one residual block formed per skip connection; the sub-pixel up-sampling layer takes the output of the previous convolutional layer as input I and produces O, as shown in the formula:
O = f^L(I) = PS(W_L * f^{L-1}(I) + b_L)
wherein: O is the output result, I is the input image, f is the activation function and f^L(I) is the activation output of the L-th layer for the data I; W_L is the weight of the L-th layer; f^{L-1}(I) is the activation output of the (L-1)-th layer for the data I; b_L is the bias (residual) term of the L-th layer;
PS is the periodic shuffling operation, which rearranges the output tensors of the r² convolutions into a new tensor, Height and Width being the height and width set for the image I;
wherein r² determines the magnification by which the output image resolution of 32 × 32 pixels is increased to 64 × 64 pixels.
4. The image authenticity identification method based on a generative adversarial network according to claim 3, wherein the classifier is a nine-layer convolutional network comprising C fully connected layers and D convolutional layers, wherein C is two and D is seven.
CN202010843673.9A 2020-08-20 2020-08-20 Image authenticity identification method based on a generative adversarial network Active CN112132181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010843673.9A CN112132181B (en) 2020-08-20 2020-08-20 Image authenticity identification method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010843673.9A CN112132181B (en) 2020-08-20 2020-08-20 Image authenticity identification method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN112132181A CN112132181A (en) 2020-12-25
CN112132181B true CN112132181B (en) 2023-05-05

Family

ID=73851398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010843673.9A Active CN112132181B (en) 2020-08-20 2020-08-20 Image authenticity identification method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN112132181B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN111461134A (en) * 2020-05-18 2020-07-28 南京大学 Low-resolution license plate recognition method based on generation countermeasure network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102184755B1 (en) * 2018-05-31 2020-11-30 서울대학교 산학협력단 Apparatus and Method for Training Super Resolution Deep Neural Network
CN109801221A (en) * 2019-01-18 2019-05-24 腾讯科技(深圳)有限公司 Generate training method, image processing method, device and the storage medium of confrontation network
CN110097543B (en) * 2019-04-25 2023-01-13 东北大学 Hot-rolled strip steel surface defect detection method based on generation type countermeasure network
CN110288537A (en) * 2019-05-20 2019-09-27 湖南大学 Facial image complementing method based on the depth production confrontation network from attention
CN110516575A (en) * 2019-08-19 2019-11-29 上海交通大学 GAN based on residual error domain richness model generates picture detection method and system
CN110689086B (en) * 2019-10-08 2020-09-25 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN111508508A (en) * 2020-04-15 2020-08-07 腾讯音乐娱乐科技(深圳)有限公司 Super-resolution audio generation method and equipment

Also Published As

Publication number Publication date
CN112132181A (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN110136063B (en) Single image super-resolution reconstruction method based on condition generation countermeasure network
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN110378844B (en) Image blind motion blur removing method based on cyclic multi-scale generation countermeasure network
CN109978762B (en) Super-resolution reconstruction method based on condition generation countermeasure network
CN106991646B (en) Image super-resolution method based on dense connection network
CN111080513B (en) Attention mechanism-based human face image super-resolution method
CN111951153B (en) Face attribute refined editing method based on generation of countering network hidden space deconstructment
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN111967573A (en) Data processing method, device, equipment and computer readable storage medium
CN111553296B (en) Two-value neural network stereo vision matching method based on FPGA
CN111062432B (en) Semantically multi-modal image generation method
CN111783862A (en) Three-dimensional significant object detection technology of multi-attention-directed neural network
Qin et al. Gradually enhanced adversarial perturbations on color pixel vectors for image steganography
CN112396554B (en) Image super-resolution method based on generation of countermeasure network
CN111291810A (en) Information processing model generation method based on target attribute decoupling and related equipment
CN114283058A (en) Image super-resolution reconstruction method based on countermeasure network and maximum mutual information optimization
CN112132181B (en) Image authenticity identification method based on a generative adversarial network
CN115860113B (en) Training method and related device for self-countermeasure neural network model
CN116229323A (en) Human body behavior recognition method based on improved depth residual error network
CN113344110B (en) Fuzzy image classification method based on super-resolution reconstruction
CN115760603A (en) Interference array broadband imaging method based on big data technology
CN115294182A (en) High-precision stereo matching method based on double-cross attention mechanism
CN109146886B (en) RGBD image semantic segmentation optimization method based on depth density
CN111882563B (en) Semantic segmentation method based on directional full convolution network
CN111932456A (en) Single image super-resolution reconstruction method based on generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Ma Hongbin

Inventor after: Ma Yanlong

Inventor after: Wang Yingli

Inventor before: Ma Yanlong

Inventor before: Ma Hongbin

Inventor before: Wang Yingli

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant