CN111241958B - Video image identification method based on residual error-capsule network - Google Patents

Video image identification method based on residual error-capsule network

Info

Publication number
CN111241958B
CN111241958B (application CN202010008315.6A)
Authority
CN
China
Prior art keywords
capsule
residual error
features
network
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010008315.6A
Other languages
Chinese (zh)
Other versions
CN111241958A (en)
Inventor
陈波
冯婷婷
张勇
邓媛丹
吴思璠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202010008315.6A
Publication of CN111241958A
Application granted
Publication of CN111241958B
Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video image identification method based on a residual error-capsule network, belonging to the image classification technology in the field of computer vision and image processing. The method constructs a residual error-capsule neural network from a residual neural network that extracts the latent features of the image, a capsule network that encodes the correspondence between local parts and the whole object, and a decoder that reconstructs the image. It mainly solves the overfitting and vanishing-gradient problems of convolutional neural networks: the original input image is reconstructed from the output vector of the capsule network, and the model identifies and classifies according to the degree of match between the reconstruction and the original image, further improving the detection performance on forged face images and videos.

Description

Video image identification method based on residual error-capsule network
Technical Field
The invention relates to the technical field of image processing, in particular to a video image identification method based on a residual error-capsule network.
Background
Residual neural networks (ResNets) are easy to optimize, converge quickly, and can gain accuracy from considerably increased depth. Their internal residual blocks use skip connections, which alleviate the vanishing-gradient problem that arises as depth increases in a deep neural network.
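For illustration, the skip connection just described can be sketched as follows (a minimal PyTorch sketch of a generic basic residual block, not the patented network itself; channel counts are illustrative):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                       # skip connection
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)   # gradients flow through the identity path
```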
Convolutional neural networks (CNNs) are a class of feedforward neural networks that perform convolution computations and have a deep structure; they are one of the representative algorithms of deep learning. The successful application of convolutional neural networks to object recognition and classification tasks has made them a favorite of the computer vision community. A CNN is composed of many stacked neurons. Computing the convolutions between neurons requires a large amount of computation, so pooling is often used to reduce the size of the network layers. Convolution can learn many complex features of the data through simple computations. Its artificial neurons respond to surrounding units within a limited receptive field, which gives excellent performance on large-scale image processing. Application fields include computer vision, natural language processing, and others.
The traditional convolutional neural network detects important features well, but it struggles to attend to the relative relationships between a local part and the whole object (such as position, proportion, orientation, size, and skew), so some important positional information is lost. How to classify correctly while maintaining the correspondence between the part and the whole is a key problem in image classification.
The capsule neural network (CapsNet) is a new deep learning architecture that overcomes these shortcomings of CNNs and is a promising new network structure. A capsule represents various properties of a specific entity in the image, such as position, size, orientation, velocity, hue, and texture, and exists as a single logical unit. A routing-by-agreement algorithm is then used: when a capsule passes its learned predictions to the capsules of the next higher level, a higher-level capsule becomes active if the predictions agree, a process called dynamic routing. As the routing mechanism iterates, the capsules are trained into logical units that learn different concepts; the neural network can thus be made to recognize a face, with different parts of the face routed to capsules that understand eyes, nose, mouth, and ears respectively. These properties distinguish the capsule network from the traditional neural network.
With the development of deep learning comes the risk of attacks using forged faces of legitimate users. Under this trend, a batch of high-quality fake image and video generation technologies has appeared, such as the Deepfake technology, the Face2Face technology, and GANs and their variants. The abuse of these technologies creates safety hazards in the financial industry, so identifying forged images and videos is a key link in financial anti-fraud.
Digital media forensics methods are mainly based on texture, motion information, and multispectral characteristics. Common forgery detection approaches include: analyzing the differences between GAN-generated images and real images; detecting GAN-generated images with co-occurrence matrices; exploiting the color differences between GAN-generated and real images in non-RGB color spaces; detecting Deepfake-generated fake video with biological signals such as eye blinks; detecting forged video through the unique artifacts left by the resolution mismatch between the warped face area and its surroundings; identifying fakes from inconsistencies in head pose; distinguishing image authenticity through foreground-background correlation analysis based on optical flow; judging authenticity from the difference in spectral reflectance between skin and other materials; and extracting the motion information of the face region from video to judge real versus fake faces based on local patterns of diffusion speed. These detection methods demonstrate, to some extent, the importance of texture characteristics, motion information, and multispectral characteristics. Their disadvantages are as follows. Texture features are susceptible to illumination, image resolution, and the like. Motion information is widely used, but can be attacked at low cost by an attacker who hollows out the mouth and eyes of a facial picture and then performs the required actions behind it. Multispectral methods have strict lighting requirements, the presented multispectral images give a poorer user experience, and the cost is higher than that of a visible-light system.
With the development of deep neural networks, identification methods can be based on learned features. A two-stream face tampering detection method trains GoogLeNet and a patch-based triplet network separately, using the two streams to capture local noise residuals and camera features to detect forged faces; a hybrid of a convolutional neural network and a capsule network has been used for detection; a new CNN-based network has been proposed for detecting face tampering in video; learning-based methods have been used for general manipulation detection, which do not depend on preselected features and can automatically learn how to detect various image operations or arbitrary preprocessing; XceptionNet is a traditional CNN trained on ImageNet, based on separable convolutions with residual connections. Compared with texture features, motion information, and multispectral features, detection methods based on learned features are less affected by external factors, achieve high detection rates, and perform well on forged faces. Most of them, however, are tailored to specific forgery scenarios. The present invention provides a neural network structure applicable to various forgery scenarios, solves the vanishing-gradient and overfitting problems in the classification task, and improves the generalization ability of the identification network.
Disclosure of Invention
Aiming at the problems of overfitting and vanishing gradients in the convolutional neural networks adopted by existing classification methods, the invention provides a video image identification method based on a residual error-capsule network.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
a video image identification method based on a residual error-capsule network comprises the following steps:
s1, inputting a video image containing a human face and preprocessing the video image;
s2, extracting potential network features in the face image by adopting a pre-trained residual neural network model to obtain a face potential feature map;
s3, extracting the features of the potential face feature map obtained in the step S2 by adopting a capsule network model;
and S4, reconstructing the feature map of the features extracted in the step S3 by adopting a decoder, and identifying and classifying the video images according to the matching degree of the reconstructed visual feature map and the potential feature map of the human face obtained in the step S2.
Further, the preprocessing the video image including the face in the step S1 specifically includes: and adopting a Dlib face recognition positioning library to perform face positioning on the video image containing the face, and cutting the detected face into a set size.
Further, the residual neural network model in step S2 includes a two-dimensional convolution unit, a pooling unit, a first residual layer and a second residual layer, which are connected in sequence, where the first residual layer includes three residual blocks, the second residual layer includes four residual blocks, and each residual block includes two convolution blocks.
Further, in the step S2, the features in the face image are extracted by using the two-dimensional convolution unit in the residual neural network model, and then the potential features in the face image are extracted sequentially through the first residual layer and the second residual layer.
Further, the capsule network model in step S3 includes extracting capsules and outputting capsules; the extraction capsule adopts three parallel feature extraction modules to extract features, performs superposition operation on the extracted features, performs compression operation, and finally sends feature information to an output capsule through a dynamic routing algorithm; the output capsule adopts a real capsule and a false capsule as classified capsules for true and false identification.
Further, the feature extraction module comprises a two-dimensional convolution unit, a statistical pool unit and a one-dimensional convolution unit, wherein a two-dimensional normalization unit and a ReLU activation function are arranged behind the two-dimensional convolution unit, and a one-dimensional normalization unit and an output unit are arranged behind the one-dimensional convolution unit.
Further, in the step S4, the decoder performs feature map reconstruction on the extracted features by using a two-layer feedforward neural network, and constructs a residual error-capsule network structure in a full-connection decoder manner together with the capsule network model.
Further, the feature extraction module comprises two-dimensional convolution units, wherein a two-dimensional normalization unit and a ReLU activation function are arranged behind the first two-dimensional convolution unit, and a two-dimensional normalization unit and an output unit are arranged behind the second two-dimensional convolution unit.
Further, in step S4, the decoder performs feature map reconstruction on the extracted features by using three two-dimensional deconvolution units, and constructs a residual error-capsule network structure in a deconvolution decoder mode together with the capsule network model.
Further, the loss function for identifying and classifying the video images combines an edge loss function and a reconstruction loss function into a total loss function, expressed as:

$$L = L_{margin} + \lambda_{recon} L_{recon} = \sum_k \left[ T_k \max(0,\, m^+ - \|v_k\|)^2 + \lambda (1 - T_k) \max(0,\, \|v_k\| - m^-)^2 \right] + \lambda_{recon} \frac{1}{N} \sum_{n=1}^{N} \|x^{recon}_n - x_n\|^2$$

where $L_{margin}$ denotes the edge loss function, $L_{recon}$ the reconstruction loss function, $\lambda_{recon}$ the reconstruction loss weight, $T_k$ the label of class $k$, $v_k$ the class-$k$ output capsule, $x^{recon}$ the reconstructed input features, $x$ the input features, $N$ the number of input features, and $m^+$, $m^-$ the scaling coefficients of the positive and negative examples respectively.
The invention has the following beneficial effects:
(1) A residual neural network, more stable than the traditional convolutional neural network, performs the primary feature extraction on the preprocessed image, so the extracted latent features contain more feature points;
(2) Two different reconstruction implementations are built by fusing a capsule network, namely a network structure with a fully-connected decoder and a network structure with a deconvolution decoder, maximizing forgery detection performance;
(3) Random Gaussian noise and a compression operation are added to the agreement-based dynamic routing algorithm of the capsule network, addressing the overfitting and vanishing-gradient problems;
(4) Throughout the image identification process, the feature maps of the images before and after reconstruction are visualized, which makes the network structure easier to adjust and understand during training.
Drawings
FIG. 1 is a schematic flow chart of the video image identification method based on the residual error-capsule network of the present invention;
FIG. 2 is a schematic diagram of a residual neural network according to an embodiment of the present invention;
FIG. 3 is a diagram of different layer activation states in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a residual error-capsule network structure of a fully-connected decoder approach in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a residual-capsule network structure of a deconvolution decoder approach in an embodiment of the present invention;
FIG. 6 is a flow chart of a dynamic routing algorithm in an embodiment of the present invention;
FIG. 7 is a comparison graph of pre-and post-reconstruction features in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
As shown in fig. 1, the embodiment of the present invention discloses a method for identifying a video image based on a residual error-capsule network, which is characterized by comprising the following steps S1 to S4:
s1, inputting a video image containing a human face and preprocessing the video image;
in this embodiment, after the video image including the face is input, the video image including the face is preprocessed, and the specific process is as follows:
adopting the Dlib face recognition and positioning library to locate the face in the video image containing a face, and cutting the detected face to a set size; here the face image is resized to 128×128.
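A minimal sketch of this preprocessing step, assuming the dlib and opencv-python packages are available (the detector choice and the largest-face heuristic are illustrative assumptions; the patent names only the Dlib library and the 128×128 target size):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # Dlib's HOG-based face detector

def crop_face(frame, size=128):
    """Locate the largest face in a BGR frame and return it resized to size x size."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)                # upsample once to catch small faces
    if not faces:
        return None
    face = max(faces, key=lambda r: r.width() * r.height())
    top, left = max(face.top(), 0), max(face.left(), 0)
    crop = frame[top:face.bottom(), left:face.right()]
    return cv2.resize(crop, (size, size))    # scale the face image to 128 x 128
```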
S2, extracting potential network features in the face image by adopting a pre-trained residual neural network model to obtain a face potential feature map;
in this embodiment, the present invention uses the first or second layer of the pre-trained residual neural network model to extract the potential network features and use them as input to the capsule network model, as shown in fig. 2.
The residual neural network model comprises a two-dimensional convolution unit, a pooling unit, a first residual layer and a second residual layer which are sequentially connected, wherein the first residual layer comprises three residual blocks, the second residual layer comprises four residual blocks, and each residual block comprises two convolution blocks.
And extracting the features in the face image by using a two-dimensional convolution unit in the residual neural network model, and extracting the potential features in the face image sequentially through the first residual layer and the second residual layer.
As shown in fig. 2, the features of the face image are extracted by a 7×7 2-D convolution, passed sequentially through the first and second residual layers of the residual network, and the extracted latent network features are output, giving 128 feature maps of size 16×16. Here "3x3 Conv, 64" denotes a Conv2d() layer with a 3×3 kernel and 64 output channels; each residual block is made up of two 3×3 2-D convolutions.
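This truncated extractor can be sketched with a pre-trained torchvision ResNet-34 (a sketch under the assumption that a recent torchvision is available; the invention's own pre-training is not reproduced here):

```python
import torch.nn as nn
from torchvision import models

def build_latent_extractor() -> nn.Sequential:
    """Keep the stem plus layer1 (3 blocks) and layer2 (4 blocks) of ResNet-34.

    For a 3 x 128 x 128 face crop this yields 128 feature maps of size 16 x 16,
    matching the capsule network input described above.
    """
    resnet = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
    return nn.Sequential(
        resnet.conv1,    # 7x7 2-D convolution unit
        resnet.bn1,
        resnet.relu,
        resnet.maxpool,  # pooling unit
        resnet.layer1,   # first residual layer: 3 residual blocks
        resnet.layer2,   # second residual layer: 4 residual blocks
    )
```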
The method tests the influence of the features extracted from different layers of the residual error network on the final identification detection performance of the network on the deepfake data set, as shown in table 1.
TABLE 1 Deepfake data set test results
Layer     Resnet18_Acc    Resnet34_Acc
Layer1    90%             93%
Layer2    91%             95%
Layer3    83%             88%
Experiments and comparison show that the features extracted at the second layer of the residual network give the best detection performance when used as input, and the training results are more stable.
As shown in fig. 3, the feature activation state diagrams of Layer1, Layer2 and Layer3 of the ResNet34 neural network are arranged from left to right. The comparison shows that the first and second layers have more activated features than the third layer; combining this with the experiments above, the second layer is selected as the feature extraction layer.
The method for extracting the potential network characteristics in the face image by adopting the residual neural network model has the advantages that:
(1) Transfer learning reduces the time needed to train the model: the pre-trained model reuses the knowledge learned by a well-trained network on a large dataset and is applied to improve the detector's performance on a smaller dataset, achieving higher initial accuracy, faster convergence, and higher final accuracy.
(2) The residual network replaces the traditional convolutional neural network: on the one hand it prevents the loss of positional information that occurs in a convolutional neural network; on the other hand, the residual network can combine features of different resolutions, since shallow layers provide high-resolution but semantically low-level features while deep layers provide semantically high-level but low-resolution features.
(3) It guides training and acts as a regularizer, reducing overfitting.
S3, extracting the features of the potential face feature map obtained in the step S2 by adopting a capsule network model;
in the present embodiment, the Capsule network model includes an extract Capsule (extract Capsule) and an Output Capsule (Output Capsule).
The capsule extraction adopts three parallel feature extraction modules to extract features, the extracted features are subjected to superposition (stack) operation, then compression (square) operation is carried out, and finally feature information is sent to an output capsule through a dynamic routing algorithm.
The output Capsule adopts Real Capsule (Real Capsule) and Fake Capsule (Fake Capsule) as classified capsules for true and false identification.
The extraction capsule can be realized by adopting a full-connection type capsule network layer or a deconvolution type capsule network layer.
Referring to fig. 4, each feature extraction module in the fully-connected capsule network layer comprises a two-dimensional convolution unit, a statistic pool unit and a one-dimensional convolution unit, wherein a two-dimensional normalization unit and a ReLU activation function are arranged behind the two-dimensional convolution unit, and a one-dimensional normalization unit and an output unit are arranged behind the one-dimensional convolution unit.
Each feature extraction module in the capsule extracts features with a 3×3 2-D convolution kernel, applies 2-D normalization, pools the data through a statistics pool (stats pooling), extracts features with a 1-D convolution of kernel size 5, applies 1-D normalization, and converts the final result into a one-dimensional vector output.
Setting up the statistics pool helps make the network independent of the input image size, which means one network structure can be applied to different problems with different input sizes without redesigning the network; the statistics pool layer computes the mean and variance of each filter.
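A sketch of one such feature extraction module with a statistics pool, in PyTorch (channel counts and the 1-D convolution stride are illustrative assumptions; the patent specifies only the kernel sizes and the order of the units):

```python
import torch
import torch.nn as nn

class StatsPool(nn.Module):
    """Statistics pool: reduce each filter map to its mean and variance,
    making downstream layers independent of the input spatial size."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = x.flatten(2)                                     # (B, C, H*W)
        return torch.stack([flat.mean(2), flat.var(2)], dim=1)  # (B, 2, C)

class ExtractionModule(nn.Module):
    """One of the three parallel modules of the fully-connected Extract Capsule."""
    def __init__(self, in_ch: int = 128, mid_ch: int = 64, out_ch: int = 8):
        super().__init__()
        self.features2d = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid_ch),   # 2-D normalization unit
            nn.ReLU(inplace=True),
        )
        self.stats = StatsPool()
        self.features1d = nn.Sequential(
            nn.Conv1d(2, out_ch, kernel_size=5, stride=2),  # 1-D convolution, kernel 5
            nn.BatchNorm1d(out_ch),   # 1-D normalization unit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.stats(self.features2d(x))    # (B, 2, mid_ch)
        return self.features1d(h).flatten(1)  # one-dimensional vector output
```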
Referring to fig. 5, each feature extraction module in the deconvolution type capsule network layer includes two-dimensional convolution units, wherein a two-dimensional normalization unit and a ReLU activation function are disposed behind a first two-dimensional convolution unit, and a two-dimensional normalization unit and an output unit are disposed behind a second two-dimensional convolution unit. By removing the statistics pool, more convolution information is retained.
Each feature extraction module in this extract capsule applies a 3×3 2-D convolution kernel followed by 2-D normalization, then a 5×5 2-D convolution kernel followed by 2-D normalization, then a 3×3 2-D convolution kernel followed by 2-D normalization, and converts the final result into a one-dimensional vector output.
The latent features extracted by the second layer of the residual neural network, of size 128×16×16, are output to the capsule network. The capsule network layer uses three identical feature extraction modules to extract features simultaneously, stacks the extracted information along the last dimension, and compresses the stacked result. The compression operation normalizes each element of the vector to between 0 and 1. The compression function squash() is expressed as:

$$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2}\, \frac{s_j}{\|s_j\|}$$

where $v_j$ is the vector output of capsule $j$ and $s_j$ is its input.
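A sketch of the squash() compression function:

```python
import torch

def squash(s: torch.Tensor, dim: int = -1, eps: float = 1e-8) -> torch.Tensor:
    """Scale vector s so its length lies in (0, 1) while keeping its direction:
    v = (|s|^2 / (1 + |s|^2)) * (s / |s|); eps guards against division by zero."""
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)
```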
The Extract Capsule and Output Capsule in the capsule network of the invention are connected by a dynamic routing algorithm, computed dynamically at run time by agreement, with the result routed to the appropriate output capsule. Experiments show that two routing iterations give the best results. The dynamic routing mechanism decides where information is mainly sent; the agreement between capsules computed by the dynamic routing algorithm describes the hierarchical pose relationships between object parts well, improving accuracy on visual tasks.
Let the capsule output vectors of the Extract Capsule be $u^{(i)}$; the Real Capsule of the Output Capsule is the true capsule $v^{(1)}$ and the Fake Capsule is the pseudo capsule $v^{(2)}$; $W^{(i,j)}$ routes $u^{(i)}$ to $v^{(j)}$; and $r$ is the number of iterations.
The output of the capsule network layer transfers information to the next capsule layer through the dynamic routing algorithm. Fig. 6 shows in detail the dynamic routing process that sends information from the Extract Capsule to the Output Capsule, including the parameter updates and the iteration process.
In the dynamic routing process, the capsule outputs $u^{(i)}$ are first compressed with the nonlinear activation function squash():

$$\hat{u}^{(i)} = \operatorname{squash}(u^{(i)}) \qquad \text{(Eq. 2)}$$

An affine transformation is applied to map $\hat{u}^{(i)}$ to the prediction vectors:

$$\hat{u}^{(j|i)} = W^{(i,j)}\, \hat{u}^{(i)} \qquad \text{(Eq. 1)}$$

Random Gaussian noise is added to the three-dimensional tensor $W$:

$$W \leftarrow W + \mathcal{N}(0, \sigma^2) \qquad \text{(Eq. 3)}$$

and a dropout operation is applied to $\hat{u}^{(j|i)}$:

$$\hat{u}^{(j|i)} \leftarrow \operatorname{dropout}\big(\hat{u}^{(j|i)}\big) \qquad \text{(Eq. 4)}$$

Applying squash() to the capsule outputs $u^{(i)}$ and dropout to $\hat{u}^{(j|i)}$ makes the training process more stable; the compression function scales each vector to at most unit length.
The parameter updated over the whole routing process is $W^{(i,j)}$, which follows the back-propagation algorithm; $b_{i,j}$ and $c_{i,j}$ follow the routing-by-agreement principle. $b_{i,j}$ is initialized to 0 and updated as

$$b_{i,j} \leftarrow b_{i,j} + \hat{u}^{(j|i)} \cdot v^{(j)} \qquad \text{(Eq. 6)}$$

and $c_{i,j}$ is updated as

$$c_{i,j} = \frac{\exp(b_{i,j})}{\sum_k \exp(b_{i,k})} \qquad \text{(Eq. 5)}$$

Finally the weighted sum $s_j$ is computed:

$$s_j = \sum_i c_{i,j}\, \hat{u}^{(j|i)} \qquad \text{(Eq. 7)}$$

and the nonlinear activation function squash() is applied to $s_j$ to obtain the output capsule vector: $v^{(j)} \leftarrow \operatorname{squash}(s_j)$.
For the predicted output capsules obtained after the routing algorithm, the softmax function is applied to each dimension of the output capsule vectors to achieve strong polarization, instead of simply using the capsule lengths as output; the capsules are adjusted in a maximizing manner, and the final result is the average of all softmax outputs, computed as:

$$\hat{y} = \frac{1}{m} \sum_{d=1}^{m} \operatorname{softmax}\big(v^{(1)}_d, v^{(2)}_d\big)$$

where $m$ denotes the number of dimensions.
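A sketch of the routing and prediction steps above (Eqs. 1 to 7); the noise scale and dropout rate are illustrative assumptions, as the patent does not state their values:

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """squash() as sketched earlier in this description."""
    n2 = (s * s).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u, W, iterations=2, noise_std=0.01, drop_p=0.05, training=True):
    """u: (B, n_in, d_in) Extract Capsule outputs.
    W: (n_in, n_out, d_out, d_in) routing tensor, trained by back-propagation.
    Returns v: (B, n_out, d_out); here n_out = 2 (real and fake capsules)."""
    u = squash(u)                                          # Eq. 2
    if training:
        W = W + noise_std * torch.randn_like(W)            # Eq. 3: random Gaussian noise
    u_hat = torch.einsum('iojd,bid->bioj', W, u)           # Eq. 1: (B, n_in, n_out, d_out)
    u_hat = F.dropout(u_hat, p=drop_p, training=training)  # Eq. 4
    b = torch.zeros(u.size(0), u.size(1), W.size(1), device=u.device)
    for _ in range(iterations):                            # two iterations work best here
        c = F.softmax(b, dim=2)                            # Eq. 5: agreement coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)           # Eq. 7: weighted sum
        v = squash(s)                                      # v^(j) = squash(s_j)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)       # Eq. 6: update agreements
    return v

def predict(v):
    """Average of the per-dimension softmax over the two output capsules."""
    return F.softmax(v, dim=1).mean(dim=2)                 # (B, 2) class scores
```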
And S4, reconstructing the feature map of the features extracted in the step S3 by adopting a decoder, and identifying and classifying the video images according to the matching degree of the reconstructed visual feature map and the potential feature map of the human face obtained in the step S2.
In this embodiment, the present invention uses a decoder to reconstruct the input from the final capsule, forcing the network to hold as much information from the entire network input as possible, acting as an effective regularizer, reducing the risk of overfitting.
The decoder part adopts two implementation modes:
(1) Fully-connected decoder: the extracted features are reconstructed with a two-layer feedforward neural network, which together with the capsule network model forms the residual error-capsule network structure in the fully-connected decoder mode.
The fully-connected decoder maps the input through two linear functions and finally processes the result with a sigmoid function.
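A sketch of this fully-connected decoder (the hidden width and the ReLU between the two linear layers are assumptions; the patent names only two linear mappings and a final sigmoid):

```python
import torch.nn as nn

class FullyConnectedDecoder(nn.Module):
    """Two-layer feedforward reconstruction of the 128 x 16 x 16 latent feature map."""
    def __init__(self, in_dim: int, hidden: int = 1024, out_shape=(128, 16, 16)):
        super().__init__()
        self.out_shape = out_shape
        out_dim = out_shape[0] * out_shape[1] * out_shape[2]
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, out_dim),
            nn.Sigmoid(),                 # final sigmoid mapping
        )

    def forward(self, v):
        x = self.net(v.flatten(1))        # v: output capsule vector(s)
        return x.view(-1, *self.out_shape)
```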
(2) Deconvolution-style decoder: the extracted features are reconstructed with three two-dimensional deconvolution units, which together with the capsule network model form the residual error-capsule network structure in the deconvolution decoder mode.
The deconvolution decoder processes the input sequentially with a 3×3 2-D deconvolution kernel, a 6×6 2-D deconvolution kernel, and a 3×3 2-D deconvolution kernel.
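A sketch of the deconvolution decoder under assumed strides and channel counts (the patent names only the three kernel sizes); with a 16×2×2 input this reproduces the 128×16×16 latent map:

```python
import torch.nn as nn

class DeconvDecoder(nn.Module):
    """Three 2-D deconvolutions (3x3, 6x6, 3x3) reconstructing the latent map.

    Shape trace for input (B, 16, 2, 2):
    (B, 64, 5, 5) -> (B, 64, 14, 14) -> (B, 128, 16, 16)."""
    def __init__(self, in_ch: int = 16, out_ch: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 64, kernel_size=3, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, kernel_size=6, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, kernel_size=3, stride=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```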
Three losses are used overall: the edge loss function Margin Loss, the reconstruction loss function Reconstruction Loss, and the total loss, Total Loss = Margin Loss + Reconstruction Loss, where the reconstruction term is weighted in the sum so that it does not dominate the calculation of the Margin Loss.
The edge loss function Margin Loss is expressed as:

$$L_k = T_k \max(0,\, m^+ - \|v_k\|)^2 + \lambda (1 - T_k) \max(0,\, \|v_k\| - m^-)^2$$

where $T_k = 1$ when class $k$ is present, and $m^+$, $m^-$ are the scaling coefficients of the positive and negative examples, with $m^+ = 0.9$ and $m^- = 0.1$. $\lambda$ reduces the loss from classes that do not appear in the picture; the invention uses $\lambda = 0.5$. The total margin loss is the sum of the losses of all output capsules.
The reconstruction loss function Reconstruction Loss is expressed as:

$$L_{recon} = \frac{1}{N} \sum_{n=1}^{N} \|x^{recon}_n - x_n\|^2$$

The total loss function is thus expressed as:

$$L = \sum_k L_k + \lambda_{recon}\, L_{recon}$$

where $\sum_k L_k$ is the edge loss $L_{margin}$, $L_{recon}$ the reconstruction loss, $\lambda_{recon}$ the reconstruction loss weight, $T_k$ the label of class $k$, $v_k$ the class-$k$ output capsule, $x^{recon}$ the reconstructed input features, $x$ the input features, $N$ the number of input features, and $m^+$, $m^-$ the scaling coefficients of the positive and negative examples respectively.
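The two loss terms can be sketched as follows (the reconstruction weight lam_recon is an illustrative assumption; the patent does not state its value):

```python
import torch

def margin_loss(v, target, m_pos=0.9, m_neg=0.1, lam=0.5):
    """Margin loss over output capsule lengths. target: one-hot (B, K)."""
    lengths = v.norm(dim=-1)                                       # ||v_k||
    pos = target * torch.clamp(m_pos - lengths, min=0) ** 2
    neg = lam * (1 - target) * torch.clamp(lengths - m_neg, min=0) ** 2
    return (pos + neg).sum(dim=1).mean()

def total_loss(v, target, x_recon, x, lam_recon=0.0005):
    """Total loss = margin loss + weighted mean-squared reconstruction loss."""
    recon = torch.mean((x_recon - x) ** 2)
    return margin_loss(v, target) + lam_recon * recon
```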
The invention visualizes both the reconstructed image output by the capsule network and the input to the capsule network. Comparing the visualized feature maps helps in understanding how the network operates on the image features and in tuning the overall network parameters, improving the precision of image classification.
Comparing the feature maps before and after reconstruction, the feature points of the reconstructed image are more distinct; using the reconstruction loss as an effective regularizer forces the network to preserve as much of the information in the network input as possible, effectively reducing the risk of overfitting. The reconstructed visualization feature map of the image is shown in fig. 7.
To further illustrate the effectiveness of the method of the present invention, the present invention performs image classification and original image reconstruction experiments using the Deepfake data set and the Deepfake _ Detection data set.
In the experiments, the Deepfake data set is the one in FaceForensics++, which includes 977 videos downloaded from YouTube; 1000 original sequences were extracted, each containing a face that can be tracked easily and without problems. Detailed training data are shown in Table 1.
The deep-forgery detection dataset Deepfake_Detection, provided by Google and Jigsaw, contains over 3000 manipulated videos from 28 different scenes, as detailed in Table 2.
TABLE 1 Deepfake data set
         Real (youtube)   Fake (deepfake)
Train    4000             4000
Val      500              500
Test     500              500
TABLE 2 Deepfake _ Detection dataset
The experiments were trained on a PC with a GTX 1060 Ti. During training, Adam was selected as the optimizer with a learning rate of 0.0005. During testing, the same data sets were also run through the original capsule network, and the results show that the reconstruction-based network structures outperform the original forgery detection network on both data sets. The test results are shown in Table 3.
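A minimal training-loop sketch matching the stated setup, Adam with learning rate 0.0005, using the total_loss sketched above (the model interface returning capsules, reconstruction, and the latent input is an assumption for illustration):

```python
import torch
import torch.nn.functional as F

def train(model, loader, epochs=10, device="cuda"):
    """Train the residual-capsule model with Adam, lr = 0.0005."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=0.0005)
    for epoch in range(epochs):
        for frames, labels in loader:             # preprocessed 128x128 face crops
            frames, labels = frames.to(device), labels.to(device)
            v, x_recon, x_latent = model(frames)  # capsules, reconstruction, latent input
            target = F.one_hot(labels, num_classes=2).float()
            loss = total_loss(v, target, x_recon, x_latent)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```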
TABLE 3 comparison of test results of different models on different data sets
Model             Deepfake_Acc    Deepfake_Detection_Acc
Original          95.70%          85.34%
Fully-connected   96.30%          87.53%
Deconvolution     98.60%          88.16%
Comparison of the data set feature maps before and after reconstruction shows that the feature points of the reconstructed images are more distinct; the reconstruction loss, used as an effective regularizer, forces the network to preserve as much of the information in the network input as possible, effectively reducing the risk of overfitting.
The experimental results show that the complexity of the data set directly influences identification performance: the scenes in the Deepfake data set are simple relative to those in the Deepfake_Detection data set provided by Google (they are frontal shots of a person), so identification performance on it is better. The reconstruction idea outperforms the cross-entropy-loss idea to a certain extent: identifying with the model according to the degree of match between the reconstruction and the original image effectively improves the accuracy of identification and classification.
The invention improves performance on the authenticity identification classification task, solves the overfitting and vanishing-gradient problems in the neural network, and improves authenticity identification performance through the reconstruction network structure fused with the capsule network.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art, having the benefit of this disclosure, may effect numerous modifications thereto and changes may be made without departing from the scope of the invention in its aspects.

Claims (5)

1. A video image identification method based on a residual error-capsule network is characterized by comprising the following steps:
s1, inputting a video image containing a human face and preprocessing the video image;
s2, extracting potential network features in the face image by adopting a pre-trained residual neural network model to obtain a face potential feature map; the residual error neural network model comprises a two-dimensional convolution unit, a pooling unit, a first residual error layer and a second residual error layer which are sequentially connected, wherein the first residual error layer comprises three residual error blocks, the second residual error layer comprises four residual error blocks, and each residual error block comprises two convolution blocks; extracting features in the face image by using a two-dimensional convolution unit in a residual neural network model, and extracting potential features in the face image sequentially through a first residual layer and a second residual layer;
s3, extracting the features of the potential face feature map obtained in the step S2 by adopting a capsule network model; the capsule network model comprises an extraction capsule and an output capsule; the extraction capsule adopts three parallel feature extraction modules to extract features, performs superposition operation on the extracted features, performs compression operation, and finally sends feature information to an output capsule through a dynamic routing algorithm; the output capsule adopts a real capsule and a false capsule as classified capsules for true and false identification;
s4, reconstructing a feature map of the features extracted in the step S3 by adopting a decoder, and identifying and classifying the video images according to the matching degree of the reconstructed visual feature map and the potential face feature map obtained in the step S2; the decoder adopts two layers of feedforward neural networks to reconstruct a characteristic diagram of the extracted characteristics, and a residual error-capsule network structure in a full-connection decoder mode is constructed together with the capsule network model; or the decoder adopts three two-dimensional deconvolution units to reconstruct the feature graph of the extracted features, and the feature graph and the capsule network model together construct a residual error-capsule network structure in a deconvolution decoder mode.
2. The method for identifying video images based on the residual error-capsule network as claimed in claim 1, wherein the step S1 of preprocessing the video images including human faces specifically comprises: and adopting a Dlib face recognition positioning library to perform face positioning on the video image containing the face, and cutting the detected face into a set size.
3. The method for video image discrimination based on the residual-capsule network as claimed in claim 1, wherein said feature extraction module comprises a two-dimensional convolution unit followed by a two-dimensional normalization unit and a ReLU activation function, a statistics pool unit and a one-dimensional convolution unit followed by a one-dimensional normalization unit and an output unit.
4. The method of claim 1, wherein the feature extraction module comprises two-dimensional convolution units, wherein a first two-dimensional convolution unit is followed by a two-dimensional normalization unit and a ReLU activation function, and a second two-dimensional convolution unit is followed by a two-dimensional normalization unit and an output unit.
5. The method for identifying a video image based on a residual error-capsule network as claimed in claim 1, wherein the loss function for identifying and classifying the video image combines an edge loss function and a reconstruction loss function into a total loss function, expressed as:

$$L = \sum_k \left[ T_k \max(0,\, m^+ - \|v_k\|)^2 + \lambda (1 - T_k) \max(0,\, \|v_k\| - m^-)^2 \right] + \lambda_{recon} \frac{1}{N} \sum_{n=1}^{N} \|x^{recon}_n - x_n\|^2$$

wherein the first term is the edge loss $L_{margin}$ and the second the reconstruction loss $L_{recon}$ weighted by $\lambda_{recon}$; $T_k$ denotes class $k$, $v_k$ the class-$k$ output capsule, $x^{recon}$ the reconstructed input features, $x$ the input features, $N$ the number of input features, and $m^+$, $m^-$ the scaling coefficients of the positive and negative examples respectively.
CN202010008315.6A 2020-01-06 2020-01-06 Video image identification method based on residual error-capsule network Expired - Fee Related CN111241958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010008315.6A CN111241958B (en) 2020-01-06 2020-01-06 Video image identification method based on residual error-capsule network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010008315.6A CN111241958B (en) 2020-01-06 2020-01-06 Video image identification method based on residual error-capsule network

Publications (2)

Publication Number Publication Date
CN111241958A CN111241958A (en) 2020-06-05
CN111241958B (en) 2022-07-22

Family

ID=70876020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010008315.6A Expired - Fee Related CN111241958B (en) 2020-01-06 2020-01-06 Video image identification method based on residual error-capsule network

Country Status (1)

Country Link
CN (1) CN111241958B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222457B (en) * 2020-01-06 2023-06-16 电子科技大学 Detection method for identifying authenticity of video based on depth separable convolution
US12008071B2 (en) 2020-07-17 2024-06-11 Tata Consultancy Services Limited System and method for parameter compression of capsule networks using deep features
CN112036281B (en) * 2020-07-29 2023-06-09 重庆工商大学 Facial expression recognition method based on improved capsule network
CN112069891B (en) * 2020-08-03 2023-08-18 武汉大学 Deep fake face identification method based on illumination characteristics
CN111967427A (en) * 2020-08-28 2020-11-20 广东工业大学 Fake face video identification method, system and readable storage medium
CN112036494A (en) * 2020-09-02 2020-12-04 公安部物证鉴定中心 Gun image identification method and system based on deep learning network
CN112085734B (en) * 2020-09-25 2022-02-01 西安交通大学 GAN-based image restoration defect detection method
CN112232261A (en) * 2020-10-27 2021-01-15 上海眼控科技股份有限公司 Method and device for fusing image sequences
CN112507783A (en) * 2020-10-29 2021-03-16 上海交通大学 Mask face detection, identification, tracking and temperature measurement method based on attention mechanism
CN112256878B (en) * 2020-10-29 2024-01-16 沈阳农业大学 Rice knowledge text classification method based on deep convolution
CN112487989B (en) * 2020-12-01 2022-07-15 重庆邮电大学 Video expression recognition method based on capsule-long-and-short-term memory neural network
CN112733701A (en) * 2021-01-07 2021-04-30 中国电子科技集团公司信息科学研究院 Robust scene recognition method and system based on capsule network
CN113343886A (en) * 2021-06-23 2021-09-03 贵州大学 Tea leaf identification grading method based on improved capsule network
CN113283393B (en) * 2021-06-28 2023-07-25 南京信息工程大学 Deepfake video detection method based on image group and two-stream network
CN113610108B (en) * 2021-07-06 2022-05-20 中南民族大学 Rice pest identification method based on improved residual error network
CN113807232B (en) * 2021-09-14 2022-09-30 广州大学 Fake face detection method, system and storage medium based on double-flow network
CN114241245B (en) * 2021-12-23 2024-05-31 西南大学 Image classification system based on residual capsule neural network
CN114339398A (en) * 2021-12-24 2022-04-12 天翼视讯传媒有限公司 Method for real-time special effect processing in large-scale video live broadcast
CN115082928B (en) * 2022-06-21 2024-04-30 电子科技大学 Method for asymmetric double-branch real-time semantic segmentation network facing complex scene
WO2024108505A1 (en) * 2022-11-24 2024-05-30 深圳先进技术研究院 Myocardial fibrosis classification method based on residual capsule network
CN116030454B (en) * 2023-03-28 2023-07-18 中南民族大学 Text recognition method and system based on capsule network and multi-language model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875618A (en) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 A kind of human face in-vivo detection method, system and device
CN108898577A (en) * 2018-05-24 2018-11-27 西南大学 Based on the good malign lung nodules identification device and method for improving capsule network
CN109086728A (en) * 2018-08-14 2018-12-25 成都智汇脸卡科技有限公司 Biopsy method
CN110009097A (en) * 2019-04-17 2019-07-12 电子科技大学 The image classification method of capsule residual error neural network, capsule residual error neural network
CN110516576A (en) * 2019-08-20 2019-11-29 西安电子科技大学 Near-infrared living body faces recognition methods based on deep neural network
CN110533004A (en) * 2019-09-07 2019-12-03 哈尔滨理工大学 A kind of complex scene face identification system based on deep learning
CN110570353A (en) * 2019-08-27 2019-12-13 天津大学 Dense connection generation countermeasure network single image super-resolution reconstruction method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9082235B2 (en) * 2011-07-12 2015-07-14 Microsoft Technology Licensing, Llc Using facial data for device authentication or subject identification
KR101286454B1 (en) * 2012-02-29 2013-07-16 주식회사 슈프리마 Fake face identification apparatus and method using characteristic of eye image
CN107977932B (en) * 2017-12-28 2021-04-23 北京工业大学 Face image super-resolution reconstruction method based on discriminable attribute constraint generation countermeasure network
CN108985316B (en) * 2018-05-24 2022-03-01 西南大学 Capsule network image classification and identification method for improving reconstruction network
CN110443867B (en) * 2019-08-01 2022-06-10 太原科技大学 CT image super-resolution reconstruction method based on generation countermeasure network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898577A (en) * 2018-05-24 2018-11-27 西南大学 Based on the good malign lung nodules identification device and method for improving capsule network
CN108875618A (en) * 2018-06-08 2018-11-23 高新兴科技集团股份有限公司 A kind of human face in-vivo detection method, system and device
CN109086728A (en) * 2018-08-14 2018-12-25 成都智汇脸卡科技有限公司 Biopsy method
CN110009097A (en) * 2019-04-17 2019-07-12 电子科技大学 The image classification method of capsule residual error neural network, capsule residual error neural network
CN110516576A (en) * 2019-08-20 2019-11-29 西安电子科技大学 Near-infrared living body faces recognition methods based on deep neural network
CN110570353A (en) * 2019-08-27 2019-12-13 天津大学 Dense connection generation countermeasure network single image super-resolution reconstruction method
CN110533004A (en) * 2019-09-07 2019-12-03 哈尔滨理工大学 A kind of complex scene face identification system based on deep learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Huy H. Nguyen et al. Capsule-forensics: Using capsule networks to detect forged images and videos. 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2019. *
Huy H. Nguyen et al. Use of a capsule network to detect fake images and videos. arXiv. 2019. *
Tong Yueyang et al. Research on live face detection algorithms based on convolutional neural networks. China Master's Theses Full-text Database (Information Science and Technology). 2019. *
Yang Ze. Research on detection algorithms for forged digital images. China Master's Theses Full-text Database (Information Science and Technology). 2019. *
Chen Jian et al. A capsule-network-based Chinese-character handwriting identification algorithm. Packaging Journal. 2018, vol. 10, no. 5. *

Also Published As

Publication number Publication date
CN111241958A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111241958B (en) Video image identification method based on residual error-capsule network
US11645835B2 (en) Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications
Liu et al. Hard negative generation for identity-disentangled facial expression recognition
De Rezende et al. Exposing computer generated images by using deep convolutional neural networks
Larsen et al. Autoencoding beyond pixels using a learned similarity metric
CN112766158B (en) Multi-task cascading type face shielding expression recognition method
Mallouh et al. Utilizing CNNs and transfer learning of pre-trained models for age range classification from unconstrained face images
Demir et al. Where do deep fakes look? synthetic face detection via gaze tracking
WO2020103700A1 (en) Image recognition method based on micro facial expressions, apparatus and related device
Tian et al. Ear recognition based on deep convolutional network
CN109886881B (en) Face makeup removal method
CN111444881A (en) Fake face video detection method and device
WO2019227479A1 (en) Method and apparatus for generating face rotation image
JP2021528728A (en) Face image recognition using pseudo images
CN109740539B (en) 3D object identification method based on ultralimit learning machine and fusion convolution network
Zhang et al. Learning upper patch attention using dual-branch training strategy for masked face recognition
CN113989890A (en) Face expression recognition method based on multi-channel fusion and lightweight neural network
Chen et al. Mask dynamic routing to combined model of deep capsule network and u-net
Tong et al. Adaptive weight based on overlapping blocks network for facial expression recognition
Sharma et al. IPDCN2: Improvised Patch-based Deep CNN for facial retouching detection
Vallez et al. Diffeomorphic transforms for data augmentation of highly variable shape and texture objects
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
CN115457374B (en) Deep pseudo-image detection model generalization evaluation method and device based on reasoning mode
Althbaity et al. Colorization Of Grayscale Images Using Deep Learning
Tunc et al. Age group and gender classification using convolutional neural networks with a fuzzy logic-based filter method for noise reduction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220722