CN112132181A - Image authenticity identification method based on generative adversarial network


Info

Publication number
CN112132181A
Authority
CN
China
Prior art keywords
model
image
network
layer
output
Prior art date
Legal status
Granted
Application number
CN202010843673.9A
Other languages
Chinese (zh)
Other versions
CN112132181B (en)
Inventor
Ma Yanlong
Ma Hongbin
Wang Yingli
Current Assignee
Heilongjiang University
Original Assignee
Heilongjiang University
Priority date
Filing date
Publication date
Application filed by Heilongjiang University filed Critical Heilongjiang University
Priority to CN202010843673.9A priority Critical patent/CN112132181B/en
Publication of CN112132181A publication Critical patent/CN112132181A/en
Application granted granted Critical
Publication of CN112132181B publication Critical patent/CN112132181B/en
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/018 Certifying business or products
    • G06Q30/0185 Product, service or business identity fraud

Abstract

An image authenticity identification method based on a generative adversarial network. Image classification networks based on deep learning have limited accuracy on low-resolution, unclear images, and improving the recognition accuracy of blurred images merely by increasing the number of network layers has certain limitations. The method comprises the following specific steps: the image is processed and identified by a generative adversarial network, wherein the generative adversarial network comprises a generative model and a classification model, the generative model is used to obtain generated data, and the classification model is used to judge authenticity. The invention is used for identifying the authenticity of images.

Description

Image authenticity identification method based on a generative adversarial network
Technical Field
The invention relates to an image authenticity identification method based on a generative adversarial network.
Background
With the continuous development of artificial intelligence, image recognition methods based on deep learning have been widely applied, and many deep-learning image classification networks have emerged, such as AlexNet, VGG, GoogLeNet and ResNet, whose numbers of layers are 8, 19, 22 and 152 respectively. On a clear image data set their recognition error rates are 16.4%, 7.33%, 6.66% and 4.92% respectively, so classification accuracy improves as the number of network layers increases. For an image data set of low resolution and definition, however, the recognition error rate of AlexNet is 44.01%, and that of GoogLeNet is even higher, reaching 44.61%. Image classification networks based on deep learning therefore achieve low accuracy on low-resolution, unclear images, and improving the recognition accuracy of blurred images merely by increasing the number of network layers has certain limitations.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide an image authenticity identification method based on a generative adversarial network.
In order to achieve the above object, the present invention provides an image authenticity identification method based on a generative adversarial network, wherein the method processes and identifies an image by using the generative adversarial network; the generative adversarial network comprises a generative model and a classification model, the generative model obtains the generated data, and the classification model judges authenticity;
the mathematical description is as follows:
min_G max_D V(D, G) = E_{x~P_r}[log D(x)] + E_{z~P_z}[log(1 - D(G(z)))]
wherein: V(D, G) is the loss function of the generative adversarial network (GAN);
P_r and P_z are respectively the real data distribution and the random noise distribution;
x is sampled from the real data;
E is the mathematical expectation;
D(x) represents the output of the data after passing through the discriminant model;
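By way of illustration (an editor's sketch, not part of the claimed method; the discriminator outputs below are hypothetical constants), the value function above can be estimated by Monte-Carlo averaging over discriminator outputs, showing that a confident discriminator maximizes V(D, G) while a fooled discriminator drives it toward 2·log(0.5):

```python
import numpy as np

def value_function(d_real, d_fake):
    """Monte-Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].

    d_real: discriminator outputs D(x) for samples x drawn from P_r
    d_fake: discriminator outputs D(G(z)) for generated samples, z drawn from P_z
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator (real -> ~1, fake -> ~0) yields V close to 0,
# the maximum of the value function.
v_good = value_function(np.full(4, 0.99), np.full(4, 0.01))

# A fooled discriminator (outputs 0.5 everywhere) yields V = 2*log(0.5),
# the generator's optimum under this loss.
v_fooled = value_function(np.full(4, 0.5), np.full(4, 0.5))
```

The discriminant model is trained to maximize this quantity while the generative model is trained to minimize it, which is the adversarial game the method relies on.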
the method comprises the following steps:
(1) constructing a generation model;
the generative model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with each skip connection spanning one residual block;
(2) constructing a classification model;
the classifier is a nine-layer convolutional network comprising C fully-connected layers and D convolutional layers, wherein an m × n × p image sample undergoes feature extraction through seven convolutional layers, the feature information extracted by the convolutional layers is integrated by two fully-connected layers, and finally a (k+1)-dimensional classification result is output; the first k output dimensions correspond to the confidence of each class, and the (k+1)-th dimension is the confidence that the sample is judged fake;
(3) constructing a training model;
the training model is a VGG16 network, i.e. a 16-layer VGG network that has been pre-trained on images whose resolution has been adjusted to 64 × 64 pixels;
(4) image output and recognition;
the generative model takes a 32 × 32 pixel image as input; the output of the generative model and the corresponding 64 × 64 clear image are input into the VGG16 network; during training, the weights of VGG16 are kept unchanged while the weights of the generative model are updated; after Y iterations, the VGG16 network is replaced with the discriminant model, and the authenticity of the image is identified through the discriminant model.
In the image authenticity identification method based on the generative adversarial network, the weights of VGG16 are kept unchanged during training while the weights of the generative model are updated, and after Y iterations the VGG16 network is replaced with the discriminant model, where Y is 2000, 20000, 50000 or 80000.
In the image authenticity identification method based on the generative adversarial network, the generative model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with each skip connection spanning one residual block; the sub-pixel up-sampling layer takes the output of the previous convolutional layer as input I and obtains the output O according to the following formula:
O = f_L(I) = PS(W_L × f_{L-1}(I) + b_L)
in the formula: PS is a periodic shuffle operation that rearranges the r²-channel output tensor of the convolutional layer into a new tensor;
H and W are the height and width of the image;
r² is the magnification, by which the output image resolution is increased from 32 × 32 pixels to 64 × 64 pixels.
In the image authenticity identification method based on the generative adversarial network, the classifier is a nine-layer convolutional network comprising C fully-connected layers and D convolutional layers, where C is two and D is seven.
The invention has the beneficial effects that:
1. The invention repairs and enlarges low-resolution, unclear images through the generative model to improve the recognition accuracy for unclear images. The generative model adopts a skip-connection structure, with each skip connection spanning one residual block, which effectively reduces the number of parameters in the network, facilitates back-propagation of the gradient, and accelerates the convergence of the network.
2. The invention achieves a high recognition rate and effectively solves the problem of poor recognition performance when the number of effective samples is small.
Drawings
FIG. 1 is a schematic diagram of the structure of a GAN of the present invention;
FIG. 2 is a schematic diagram of the structure of the generative model of the present invention;
FIG. 3 is a diagram illustrating a structure of a residual block;
FIG. 4 is a schematic diagram of a classifier;
fig. 5 is a schematic diagram of a structure of a generated sample.
Detailed Description
To further clarify the structure, characteristics and other objects of the present invention, a detailed description is given below with reference to the preferred embodiments, which are intended only to illustrate the technical solutions of the present invention and not to limit it.
In a first embodiment, the image authenticity identification method based on the generative adversarial network processes and identifies an image by using the generative adversarial network; the generative adversarial network comprises a generative model and a classification model, generated data is obtained through the generative model, and authenticity is judged through the classification model;
the mathematical description is as follows:
min_G max_D V(D, G) = E_{x~P_r}[log D(x)] + E_{z~P_z}[log(1 - D(G(z)))]
wherein: V(D, G) is the loss function of the generative adversarial network (GAN);
P_r and P_z are respectively the real data distribution and the random noise distribution;
x is sampled from the real data;
E is the mathematical expectation;
D(x) represents the output of the data after passing through the discriminant model;
the method comprises the following steps:
(1) constructing a generation model;
the generative model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with each skip connection spanning one residual block;
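As a sketch of the skip-connection structure in step (1) (an editor's illustration using a naive single-channel convolution; the actual generative model uses multi-channel convolutions and stacks four such blocks), a residual block computes x + F(x), so with all-zero weights it reduces to the identity:

```python
import numpy as np

def conv3x3(x, w):
    # Naive "same" 3x3 convolution on a single-channel feature map,
    # stride 1, zero padding (for illustration only).
    h, wd = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """One residual block with an identity skip connection.

    The block learns a residual F(x) and outputs x + F(x); because the
    identity path carries the gradient directly, back-propagation through
    a stack of such blocks converges faster, as noted in the description.
    """
    f = np.maximum(conv3x3(x, w1), 0.0)  # first conv + ReLU
    f = conv3x3(f, w2)                   # second conv
    return x + f                         # skip connection
```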
(2) constructing a classification model;
the classifier is a nine-layer convolutional network comprising C fully-connected layers and D convolutional layers, wherein an m × n × p image sample undergoes feature extraction through seven convolutional layers, the feature information extracted by the convolutional layers is integrated by two fully-connected layers, and finally a (k+1)-dimensional classification result is output; the first k output dimensions correspond to the confidence of each class, and the (k+1)-th dimension is the confidence that the sample is judged fake;
wherein: m × n represents the image resolution;
p represents the number of image channels;
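The (k+1)-dimensional output of step (2) can be interpreted as follows (a hedged sketch; the softmax normalization and the toy logits are the editor's assumptions, as the patent does not specify the output activation):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def interpret_output(logits, k):
    """Split a (k+1)-dimensional classifier output as in step (2):
    the first k entries are per-class confidences for a real image,
    and the (k+1)-th entry is the confidence that the input is fake."""
    probs = softmax(np.asarray(logits, dtype=float))
    return {"class_confidences": probs[:k], "fake_confidence": probs[k]}
```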
(3) constructing a training model;
the training model is a VGG16 network, i.e. a 16-layer VGG network that has been pre-trained on images whose resolution has been adjusted to 64 × 64 pixels;
(4) image output and recognition;
the generative model takes a 32 × 32 pixel image as input; the output of the generative model and the corresponding 64 × 64 clear image are input into the VGG16 network; during training, the weights of VGG16 are kept unchanged while the weights of the generative model are updated; after Y iterations, the VGG16 network is replaced with the discriminant model, and the authenticity of the image is identified through the discriminant model.
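The two training phases of step (4) can be sketched schematically (an editor's illustration; `generator_step` is a hypothetical callable standing in for one weight update of the generative model):

```python
def train_schedule(generator_step, total_iters, swap_iter):
    """Schematic of step (4): for the first `swap_iter` iterations the
    frozen VGG16 network guides the generator; afterwards the VGG16
    network is replaced with the discriminant model, which then judges
    image authenticity."""
    phase_log = []
    for it in range(1, total_iters + 1):
        guide = "vgg16" if it <= swap_iter else "discriminator"
        generator_step(guide)  # one generator update against the current guide
        phase_log.append(guide)
    return phase_log
```

In the patent, the swap point Y takes values such as 2000 iterations; the sketch uses small numbers only to show the phase change.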
In a second embodiment, this embodiment further describes the image authenticity identification method based on the generative adversarial network according to the first embodiment, wherein the weights of the generative model are updated while the weights of VGG16 are kept unchanged during training, and the VGG16 network is replaced with the discriminant model after Y iterations, where Y is 2000, 20000, 50000 or 80000.
In a third embodiment, this embodiment further describes the image authenticity identification method based on the generative adversarial network described in the first embodiment, wherein the generative model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with each skip connection spanning one residual block; the sub-pixel up-sampling layer takes the output of the previous convolutional layer as input I and obtains the output O according to the following formula:
O = f_L(I) = PS(W_L × f_{L-1}(I) + b_L)
in the formula: PS is a periodic shuffle operation that rearranges the r²-channel output tensor of the convolutional layer into a new tensor;
H and W are the height and width of the image;
r² is the magnification, by which the output image resolution is increased from 32 × 32 pixels to 64 × 64 pixels.
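The periodic shuffle PS can be made concrete with a small NumPy sketch (an editor's illustration; the channel-first layout and the exact rearrangement order are assumptions consistent with the common sub-pixel convolution formulation):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffle PS: rearrange a (C*r^2, H, W) tensor into
    (C, r*H, r*W), as used by the sub-pixel up-sampling layer."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Four 32x32 feature maps (r = 2, so r^2 = 4 channels per output channel)
# become one 64x64 map: the 32x32 -> 64x64 magnification of the method.
x = np.arange(4 * 32 * 32, dtype=float).reshape(4, 32, 32)
y = pixel_shuffle(x, 2)  # shape (1, 64, 64)
```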
In a fourth embodiment, this embodiment further describes the image authenticity identification method based on the generative adversarial network described in the first embodiment, wherein the classifier is a nine-layer convolutional network comprising C fully-connected layers and D convolutional layers, where C is two and D is seven.
It should be noted that the above summary and the detailed description are intended to demonstrate the practical application of the technical solutions provided by the present invention, and should not be construed as limiting the scope of the present invention. Various modifications, equivalent substitutions, or improvements may be made by those skilled in the art within the spirit and principles of the invention. The scope of the invention is to be determined by the appended claims.

Claims (4)

1. An image authenticity identification method based on a generative adversarial network, characterized in that the method processes and identifies an image by using the generative adversarial network; the generative adversarial network comprises a generative model and a classification model, generated data is obtained through the generative model, and authenticity is judged through the classification model;
the mathematical description is as follows:
min_G max_D V(D, G) = E_{x~P_r}[log D(x)] + E_{z~P_z}[log(1 - D(G(z)))]
wherein: V(D, G) is the loss function of the generative adversarial network (GAN);
P_r and P_z are respectively the real data distribution and the random noise distribution;
x is sampled from the real data;
E is the mathematical expectation;
D(x) represents the output of the data after passing through the discriminant model;
the method comprises the following steps:
(1) constructing a generation model;
the generative model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with each skip connection spanning one residual block;
(2) constructing a classification model;
the classifier is a nine-layer convolutional network comprising C fully-connected layers and D convolutional layers, wherein an m × n × p image sample undergoes feature extraction through seven convolutional layers, the feature information extracted by the convolutional layers is integrated by two fully-connected layers, and finally a (k+1)-dimensional classification result is output; the first k output dimensions correspond to the confidence of each class, and the (k+1)-th dimension is the confidence that the sample is judged fake;
wherein: m × n represents the image resolution;
p represents the number of image channels;
(3) constructing a training model;
the training model is a VGG16 network, i.e. a 16-layer VGG network that has been pre-trained on images whose resolution has been adjusted to 64 × 64 pixels;
(4) image output and recognition;
the generative model takes a 32 × 32 pixel image as input; the output of the generative model and the corresponding 64 × 64 clear image are input into the VGG16 network; during training, the weights of VGG16 are kept unchanged while the weights of the generative model are updated; after Y iterations, the VGG16 network is replaced with the discriminant model, and the authenticity of the image is identified through the discriminant model.
2. The image authenticity identification method based on the generative adversarial network as claimed in claim 1, wherein the weights of the generative model are updated while the weights of VGG16 are kept unchanged during training, and the VGG16 network is replaced with the discriminant model after Y iterations, where Y is 2000, 20000, 50000 or 80000.
3. The method as claimed in claim 2, wherein the generative model adopts a sub-pixel up-sampling layer and comprises four residual blocks; the residual blocks adopt a skip-connection structure, with each skip connection spanning one residual block; and the sub-pixel up-sampling layer takes the output of the previous convolutional layer as input I to obtain O, as shown in the following formula:
O = f_L(I) = PS(W_L × f_{L-1}(I) + b_L)
in the formula: PS is a periodic shuffle operation that rearranges the r²-channel output tensor of the convolutional layer into a new tensor;
H and W are the height and width of the image;
r² is the magnification, by which the output image resolution is increased from 32 × 32 pixels to 64 × 64 pixels.
4. The method as claimed in claim 3, wherein the classifier is a nine-layer convolutional network comprising two fully-connected layers (C) and seven convolutional layers (D).
CN202010843673.9A 2020-08-20 2020-08-20 Image true and false identification method based on generation type countermeasure network Active CN112132181B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010843673.9A CN112132181B (en) 2020-08-20 2020-08-20 Image true and false identification method based on generation type countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010843673.9A CN112132181B (en) 2020-08-20 2020-08-20 Image true and false identification method based on generation type countermeasure network

Publications (2)

Publication Number Publication Date
CN112132181A true CN112132181A (en) 2020-12-25
CN112132181B CN112132181B (en) 2023-05-05

Family

ID=73851398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010843673.9A Active CN112132181B (en) 2020-08-20 2020-08-20 Image true and false identification method based on generation type countermeasure network

Country Status (1)

Country Link
CN (1) CN112132181B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801221A (en) * 2019-01-18 2019-05-24 腾讯科技(深圳)有限公司 Generate training method, image processing method, device and the storage medium of confrontation network
CN110097543A (en) * 2019-04-25 2019-08-06 东北大学 Surfaces of Hot Rolled Strip defect inspection method based on production confrontation network
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN110288537A (en) * 2019-05-20 2019-09-27 湖南大学 Facial image complementing method based on the depth production confrontation network from attention
CN110516575A (en) * 2019-08-19 2019-11-29 上海交通大学 GAN based on residual error domain richness model generates picture detection method and system
US20190370608A1 (en) * 2018-05-31 2019-12-05 Seoul National University R&Db Foundation Apparatus and method for training facial locality super resolution deep neural network
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network
CN111461134A (en) * 2020-05-18 2020-07-28 南京大学 Low-resolution license plate recognition method based on generation countermeasure network
CN111508508A (en) * 2020-04-15 2020-08-07 腾讯音乐娱乐科技(深圳)有限公司 Super-resolution audio generation method and equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xia Rui; Ma Hongbin: "Communication Network Security Technology of Generative Adversarial Networks", Mobile Communications (《移动通信》) *

Also Published As

Publication number Publication date
CN112132181B (en) 2023-05-05

Similar Documents

Publication Publication Date Title
CN111754403B (en) Image super-resolution reconstruction method based on residual learning
CN111583109B (en) Image super-resolution method based on generation of countermeasure network
CN109919204B (en) Noise image-oriented deep learning clustering method
CN110349103A (en) It is a kind of based on deep neural network and jump connection without clean label image denoising method
CN111524135A (en) Image enhancement-based method and system for detecting defects of small hardware fittings of power transmission line
CN111833352B (en) Image segmentation method for improving U-net network based on octave convolution
CN109635763B (en) Crowd density estimation method
CN110276389B (en) Mine mobile inspection image reconstruction method based on edge correction
CN111861886B (en) Image super-resolution reconstruction method based on multi-scale feedback network
CN111861906A (en) Pavement crack image virtual augmentation model establishment and image virtual augmentation method
CN111242241A (en) Method for amplifying etched character recognition network training sample
CN111986126A (en) Multi-target detection method based on improved VGG16 network
CN111882476B (en) Image steganography method for automatic learning embedding cost based on deep reinforcement learning
CN116563682A (en) Attention scheme and strip convolution semantic line detection method based on depth Hough network
CN112132181A (en) Image authenticity identification method based on generation type countermeasure network
CN116823659A (en) Low-light level image enhancement method based on depth feature extraction
CN113344110B (en) Fuzzy image classification method based on super-resolution reconstruction
CN115331073A (en) Image self-supervision learning method based on TransUnnet architecture
CN112990336B (en) Deep three-dimensional point cloud classification network construction method based on competitive attention fusion
CN115760603A (en) Interference array broadband imaging method based on big data technology
CN111260552B (en) Progressive learning-based image super-resolution method
CN111210439B (en) Semantic segmentation method and device by suppressing uninteresting information and storage device
CN114004295A (en) Small sample image data expansion method based on countermeasure enhancement
CN114037843A (en) Method for improving resolution of underwater image based on improved generation countermeasure network
CN112836729A (en) Construction method of image classification model and image classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Ma Hongbin

Inventor after: Ma Yanlong

Inventor after: Wang Yingli

Inventor before: Ma Yanlong

Inventor before: Ma Hongbin

Inventor before: Wang Yingli

GR01 Patent grant