CN110414670B - Image splicing tampering positioning method based on full convolution neural network - Google Patents

Image splicing tampering positioning method based on full convolution neural network

Info

Publication number
CN110414670B
CN110414670B (application number CN201910593383.0A)
Authority
CN
China
Prior art keywords
image
layer
pooling
convolution
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910593383.0A
Other languages
Chinese (zh)
Other versions
CN110414670A (en)
Inventor
陈北京
吴韵清
吴鹏
高野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN201910593383.0A priority Critical patent/CN110414670B/en
Publication of CN110414670A publication Critical patent/CN110414670A/en
Application granted granted Critical
Publication of CN110414670B publication Critical patent/CN110414670B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image splicing tampering positioning method based on a full convolution neural network. A splicing tampered image library is established; an image splicing tampering positioning network based on a full convolution neural network is initialized and its training process is set; the network parameters are initialized; a training image is read, the training operation is performed on it, and its splicing positioning prediction result is output; the error value between the training image's splicing positioning prediction result and the real label is calculated and the network parameters are adjusted until the error value meets the precision requirement; the prediction result that meets the precision requirement is post-processed with a conditional random field, the network parameters are adjusted, and the final prediction result of the training image is output; a test image is read, predicted with the trained network, post-processed with a conditional random field, and its final prediction result is output. The method achieves high splicing tampering positioning accuracy, low network training difficulty and a network model that converges easily.

Description

Image splicing tampering positioning method based on full convolution neural network
Technical Field
The invention belongs to the technical field of deep learning, and particularly relates to an image splicing tampering positioning method.
Background
With the development of science and technology, digital images are widely used in fields such as news, business, medical imaging and forensic investigation. At the same time, image processing software such as Photoshop, ACDSee and Freehand is available to more and more non-professionals, so the authenticity of digital images faces a serious challenge. Among digital image forensics techniques, passive forensics has practical value and significance because it requires no preprocessing of the image. Among the various tampering operations, splicing is the most common: part of a region is copied from one image and pasted into another, which greatly alters the content of the original image. Improving the accuracy of splicing positioning is therefore a current research hotspot and focus in digital image forensics.
Traditional splicing tampering positioning methods focus mainly on feature extraction and region matching: one or more features are extracted manually, and a classification algorithm then judges whether the whole image or a local region has been tampered with. Ng et al. proposed that the amplitude mean and the negative phase entropy of the normalized image bispectrum can serve as effective features for judging whether an image has been spliced [Ng T T, Chang S F, Sun Q. Blind Detection of Photomontage Using Higher Order Statistics [C]// Proceedings of the 2004 International Symposium on Circuits and Systems. Vancouver, Canada: IEEE, 2004: 688-]. Johnson et al. established a low-order, approximately linear model between image intensity and a complex lighting environment and proposed a method for estimating the environmental illumination parameters from an image [Johnson M K. Exposing Digital Forgeries in Complex Lighting Environments [J]. IEEE Transactions on Information Forensics and Security, 2007, 2(3): 450-461]; although it performs better than the algorithm of Ng et al., it cannot identify the specific tampered region. Considering that the images involved in splicing originate from different cameras, which have different camera response functions, Hsu et al. proposed a tampering positioning method that uses the camera response function as the matching feature [Hsu Y F, Chang S F. Image Splicing Detection Using Camera Response Function and Automatic Segmentation [C]// Proceedings of the 2007 IEEE International Conference on Multimedia and Expo. Beijing, China: IEEE, 2007: 28-31]. That method first checks the attributes of the boundaries between regions within an image and then decides, from the boundary-check results, whether the whole image has been tampered with; its boundary-level tamper detection accuracy is low, and it fails when the spliced images come from the same camera. Because accuracy depends directly on the quality of the hand-crafted features, it is clearly limited, and the positioning accuracy of these methods is generally low.
Deep-learning-based techniques let the network learn features automatically through training, avoiding the time-consuming manual design of features. Liu et al. proposed replacing the CNN with a fully convolutional network (FCN) and using a conditional random field (CRF) to fuse the positioning results of three FCNs; accuracy improves, but the CRF and the FCNs remain independent of each other rather than forming an end-to-end structure. Chen et al. changed the way the CRF is used and strengthened the learning of the target region, turning the whole network into an end-to-end learning system and improving positioning accuracy; however, as the depth of the network model increases, training slows continuously and it becomes more and more difficult to train an ideal model on huge data sets.
In summary, existing traditional image splicing tampering positioning methods suffer from limited positioning accuracy, while deep-learning-based positioning methods suffer from the difficulty of training deep, complex models.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides an image splicing tampering positioning method based on a full convolution neural network.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
an image splicing tampering positioning method based on a full convolution neural network comprises the following steps:
(1) establishing a splicing and tampering image library which comprises training images and test images;
(2) initializing an image splicing tampering positioning network based on a full convolution neural network, and setting a training process of the network;
(3) initializing parameters in an image splicing tampering positioning network;
(4) reading a training image, carrying out training operation on the training image according to a set network training process, and outputting a splicing positioning prediction result of the training image;
(5) calculating an error value between a training image splicing positioning prediction result and a real label, and adjusting parameters in an image splicing tampering network until the error value meets the precision requirement;
(6) post-processing the prediction result with the accuracy meeting the requirement by using a conditional random field, adjusting parameters in the image splicing and tampering network, and outputting a final prediction result of a training image;
(7) reading a test image, adopting a trained image splicing tampering positioning network to predict the test image, carrying out post-processing on a prediction result through a conditional random field, and outputting a final prediction result of the test image.
Further, in the step (1), the training images and the test images each comprise two types of images, namely spliced and tampered color images and the corresponding binarized real label images; the black part of a binarized real label image corresponds to the untampered area of the color image, and the white part corresponds to the spliced and tampered area of the color image.
Further, in step (2), initializing the image stitching tamper locating network based on the full convolution neural network is as follows:
The image splicing tampering positioning network is provided, in sequence, with 1 convolution block, 4 structurally identical residual blocks and 1 deconvolution block; adjacent blocks are connected by 1 pooling layer. The convolution block comprises 2 convolution layers, each immediately followed by 1 activation layer. Each residual block comprises 3 convolution layers; each of the first two convolution layers is immediately followed by 1 activation layer, and the output of the last convolution layer is added to the output of the first activation layer of the residual block and then immediately followed by 1 activation layer. The deconvolution block comprises 3 convolution layers; each of the first two convolution layers is immediately followed by 1 activation layer and 1 Dropout layer, and the last convolution layer is immediately followed by 1 deconvolution layer that performs a two-fold upsampling operation to give output 1. The output of the second residual block, after its pooling layer, passes through 1 convolution layer and 1 pooling layer, is added to output 1 and is then immediately followed by 1 deconvolution layer performing a two-fold upsampling operation to give output 2. The output of the third residual block, after its pooling layer, passes through 1 convolution layer and 1 pooling layer, is added to output 2 and is then immediately followed by 1 deconvolution layer performing an eight-fold upsampling operation, giving the output of the whole network.
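For illustration only, a minimal PyTorch sketch of one such residual block is given below; the 3 × 3 kernel size and the channel counts are assumptions made for the example rather than limitations of the method.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block as described above: three convolutions; the output of
    the third convolution is added to the output of the block's first activation,
    and the sum passes through a final activation."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = self.relu(self.conv1(x))   # output of the block's first activation
        out = self.relu(self.conv2(identity))
        out = self.conv3(out)
        return self.relu(out + identity)      # shortcut addition, then activation
```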
Further, in the step (2), the step of setting the training process of the network is as follows:
(2-1) splicing and tampering the training image input image with a positioning network, and performing convolution activation on the input image by using a convolution kernel in a convolution block to obtain a feature map of the convolution block;
(2-2) pooling the feature map of the volume block, and obtaining the feature map of the first pooling layer through maximal pooling with a pooling window of 2 x 2;
(2-3) performing convolution activation on the feature map of the first pooling layer by using the convolution kernel of the first residual block to obtain a feature map of the first residual block;
(2-4) pooling the feature map of the first residual block, and obtaining a feature map of a second pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-5) performing convolution activation on the feature map of the second pooling layer by using the convolution kernel of the second residual block to obtain a feature map of the second residual block;
(2-6) pooling the feature map of the second residual block, and obtaining a feature map of a third pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-7) performing convolution activation on the feature map of the third pooling layer by using the convolution kernel of the third residual block to obtain a feature map of the third residual block;
(2-8) pooling the feature map of the third residual block, and obtaining a feature map of a fourth pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-9) performing convolution activation on the feature map of the fourth pooling layer by using the convolution kernel of the fourth residual block to obtain a feature map of the fourth residual block;
(2-10) pooling the feature map of the fourth residual block, and obtaining a feature map of a fifth pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-11) performing convolution activation on the feature map of the fifth pooling layer by using a convolution kernel of the deconvolution block, and then performing twice deconvolution to obtain a feature map 1 of the deconvolution block, wherein the feature map 1 has the same size as the feature map of the fourth pooling layer;
(2-12) performing convolution activation on the feature map of the fourth pooling layer, adding the feature map of the deconvolution block to the feature map 1 of the deconvolution block, and performing twice deconvolution to obtain a feature map 2 of the deconvolution block, wherein the feature map 2 has the same size as the feature map of the third pooling layer;
(2-13) performing convolution activation on the feature map of the third pooling layer, adding the feature map of the deconvolution block to the feature map 2 of the deconvolution block, and performing eight-time deconvolution to obtain a feature map 3 of the deconvolution block, wherein the feature map has the same size as the input training image and corresponds to the splicing tampering positioning prediction result of each pixel point of the input training image.
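As a rough illustration of the fusion in steps (2-11) to (2-13), the sketch below upsamples the deepest feature map two-fold, adds a projected shallower map, upsamples two-fold again, adds a still shallower map, and finally upsamples eight-fold back to the input resolution (FCN-8s style); the channel counts and the two-class output are assumptions of the example.

```python
import torch.nn as nn

class SkipFusionDecoder(nn.Module):
    """Schematic decoder: pool5 -> 2x up (feature map 1) -> + pool4 projection
    -> 2x up (feature map 2) -> + pool3 projection -> 8x up (feature map 3)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.score_pool5 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.score_pool4 = nn.Conv2d(512, num_classes, kernel_size=1)
        self.score_pool3 = nn.Conv2d(256, num_classes, kernel_size=1)
        self.up2_a = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=4, stride=2, padding=1)
        self.up2_b = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, kernel_size=16, stride=8, padding=4)

    def forward(self, pool3, pool4, pool5):
        out = self.up2_a(self.score_pool5(pool5))        # feature map 1 (2x upsampling)
        out = self.up2_b(out + self.score_pool4(pool4))  # feature map 2 (2x upsampling)
        return self.up8(out + self.score_pool3(pool3))   # feature map 3 (8x upsampling)
```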
Further, in step (3), the parameters in the image stitching tamper locating network are initialized as follows:
firstly, loading trained parameters of the VGG-16 convolutional layer, and directly loading corresponding parameters of the VGG-16 on the convolutional layer corresponding to the VGG-16 in the convolutional block and the 4 residual blocks in the image splicing, tampering and positioning network; parameters of a third convolution layer in the second residual block, all convolution layers of the deconvolution block, a convolution layer through which the output of the second residual block passes after passing through the pooling layer, and a convolution layer through which the output of the third residual block passes after passing through the pooling layer are all initialized randomly according to normal distribution; setting a learning rate alpha; setting the weight to be adjusted once each batchsize training sample is used; an iteration period epoch is set.
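A sketch of this initialization step is shown below, assuming a recent torchvision build for the pretrained VGG-16 weights; build_locating_network, vgg_mapped_convs and randomly_initialized_convs are hypothetical names standing in for the network of Fig. 3, and the learning rate, batch size and epoch values are purely illustrative.

```python
import torch
import torchvision

# Hypothetical constructor for the network of the invention (not a real API).
net = build_locating_network()

# Copy pretrained VGG-16 convolution weights into the matching encoder layers.
vgg16 = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1)
vgg_convs = [m for m in vgg16.features if isinstance(m, torch.nn.Conv2d)]
for dst, src in zip(net.vgg_mapped_convs, vgg_convs):    # hypothetical attribute
    dst.weight.data.copy_(src.weight.data)
    dst.bias.data.copy_(src.bias.data)

# Remaining convolution layers are initialized randomly from a normal distribution.
for layer in net.randomly_initialized_convs:             # hypothetical attribute
    torch.nn.init.normal_(layer.weight, mean=0.0, std=0.01)
    torch.nn.init.zeros_(layer.bias)

alpha = 1e-4      # learning rate (illustrative)
batch_size = 8    # weights adjusted once per batch of batchsize samples (illustrative)
epochs = 50       # iteration period (illustrative)
optimizer = torch.optim.SGD(net.parameters(), lr=alpha)
```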
Further, the specific process of step (4) is as follows:
(4-1) forward propagation stage: read arbitrary training image data x_k from the training set X and input it into the image splicing tampering positioning network; mark the real label corresponding to the training image in the training set as O_k;
(4-2) convolution activation process: sequentially convolving input training images and various characteristic graphs obtained by the training images through each block in the network with a trainable filter, and adding an offset:
x_j^l = f( Σ_{i∈M_j} x_i^(l-1) * k_ij^l + b_j^l )
wherein x_j^l denotes the j-th feature map output by the l-th convolution layer; M_j denotes the set of feature maps output by the (l-1)-th convolution layer; x_i^(l-1) denotes the i-th feature map output by the (l-1)-th convolution layer; k_ij^l denotes the filter corresponding to the j-th feature map of the l-th convolution layer; b_j^l denotes the bias corresponding to the j-th feature map of the l-th convolution layer; f(·) denotes the rectified linear unit activation function ReLU, with:
f(x) = x for x ≥ 0, and f(x) = a·x for x < 0
wherein a ~ U(0, u), 0 < u < 1, U(0, u) is the uniform distribution on (0, u), and a is a random number sampled from the uniform distribution U(0, u);
(4-3) a pooling process: adopting a maximum pooling model, taking the maximum value in a pooling domain as a characteristic graph after sub-sampling pooling, namely:
S_pq = max_(2×2)(F_pq) + b
wherein F is the input feature map matrix; p and q respectively denote the row and column indices of the pooled map; the sub-sampling pooling domain is a 2 × 2 matrix; b is the bias; S_pq is the feature map after sub-sampling pooling; max_(2×2)(F_pq) denotes the maximum value taken from the corresponding 2 × 2 pooling domain of the input feature map matrix F;
(4-4) a deconvolution process: deconvolution layers replace the fully connected layers, and their computation is consistent with the convolution activation process above;
(4-5) the n input feature map matrices are multiplied in turn by the weight matrix of each layer to obtain the output Y_k corresponding to training image x_k:
Y_k = F_n(…(F_2(F_1(x_k·W^(1))·W^(2))…)·W^(n))
wherein F_i denotes the i-th input feature map matrix and W^(i) denotes the weight matrix corresponding to the i-th layer, i = 1, 2, …, n.
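The small NumPy sketch below mirrors the activation of step (4-2) and the 2 × 2 max pooling of step (4-3); the upper bound u of the uniform distribution and the bias value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def activate(x, u=0.3):
    """Activation of step (4-2): identity for non-negative inputs; for negative
    inputs the slope a is drawn from the uniform distribution U(0, u)."""
    a = rng.uniform(0.0, u)
    return np.where(x >= 0, x, a * x)

def max_pool_2x2(feature_map, b=0.0):
    """Max pooling of step (4-3): the maximum of every 2x2 window plus a bias b."""
    h, w = feature_map.shape
    h2, w2 = h // 2, w // 2
    pooled = feature_map[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))
    return pooled + b

feature = rng.standard_normal((4, 4))
print(max_pool_2x2(activate(feature)))
```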
Further, the specific process of step (5) is as follows:
(5-1) a back propagation stage: according to the error value between the output Y_k obtained after the training image passes through the image splicing tampering positioning network and the real label O_k, calculate the output error value E_k of the k-th training image, namely:
E_k = H(O_k, Y_k) + λ·R(w)
H(O_k, Y_k) = −Σ O_k·log(Y_k)   (the cross-entropy, summed over all pixels)
R(w) = ‖w‖₂²
wherein λ is a hyper-parameter weighting the proportion of the model-complexity loss in E_k; R(w) reflects the complexity of the model; w denotes all the weights in the image splicing tampering positioning network; the symbol ‖·‖₂² is the square of the l2 norm; P is the number of training images in the training set;
(5-2) according to the output error value E_k, judge whether E_k remains stable within 0.01 ± 0.005 during one iteration period; if so, the prediction result meets the precision requirement and the training is finished; otherwise, return to the step (3).
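A possible PyTorch rendering of the error value E_k of step (5-1) is sketched below; the regularization weight lam is illustrative, and `logits`/`labels` stand for the network output and the binarized real label.

```python
import torch
import torch.nn.functional as F

def splicing_loss(logits, labels, parameters, lam=1e-4):
    """E_k = H(O_k, Y_k) + lam * R(w): pixel-wise cross-entropy between the
    prediction and the binary ground-truth mask plus an L2 penalty on all weights."""
    ce = F.cross_entropy(logits, labels)            # H(O_k, Y_k)
    r_w = sum(p.pow(2).sum() for p in parameters)   # R(w) = ||w||_2^2
    return ce + lam * r_w

# logits: (N, 2, H, W) class scores; labels: (N, H, W) with 0 = untampered, 1 = tampered.
```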
Further, in step (6), post-processing of the prediction result by the conditional random field is implemented by:
E(x) = Σ_i ψ_u(x_i) + Σ_{i≠j} ψ_p(x_i, x_j)
ψ_p(x_i, x_j) = μ(x_i, x_j)·Σ_{m=1}^{K} w^(m)·k^(m)(f_i, f_j)
wherein E(x) is the conditional random field energy function; ψ_u(x_i) is the probability that each pixel x_i belongs to a certain class and is the data term of the conditional random field; ψ_p(x_i, x_j), determined by the gray-value difference and the spatial distance between two pixels, is the smoothness term of the conditional random field and can be expressed as the sum of several Gaussian functions; μ(x_i, x_j) is a compatibility parameter; w^(m) is the weight of the m-th class and K is the number of classes; k^(m)(f_i, f_j) is the Gaussian function corresponding to class m; f_i and f_j are the feature vectors corresponding to pixel point i and pixel point j, as follows:
k(f_i, f_j) = v^(1)·exp(−|p_i − p_j|²/(2θ_α²) − |I_i − I_j|²/(2θ_β²)) + v^(2)·exp(−|p_i − p_j|²/(2θ_γ²))
wherein p_i and p_j are respectively the position information of the two points and I_i and I_j are respectively the color information of the two points; θ_α, θ_β and θ_γ are correlation parameters between the two points; the term before the plus sign represents bilateral filtering and the term after the plus sign represents spatial filtering; v^(1) and v^(2) are weight parameters.
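A post-processing sketch is given below, assuming the third-party pydensecrf package (a wrapper around the fully connected CRF of Krähenbühl and Koltun); the kernel widths, compatibility values and iteration count are illustrative, not values fixed by the invention.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(rgb_image, softmax_probs, iters=5):
    """Refine the network's per-pixel prediction with a fully connected CRF.
    rgb_image: HxWx3 uint8 image; softmax_probs: (2, H, W) class probabilities."""
    h, w = rgb_image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))  # psi_u: the data term
    # Spatial smoothness kernel (theta_gamma, weight v(2)); values illustrative.
    d.addPairwiseGaussian(sxy=3, compat=3)
    # Appearance (bilateral) kernel (theta_alpha, theta_beta, weight v(1)); values illustrative.
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(rgb_image), compat=5)
    q = d.inference(iters)                                # mean-field inference
    return np.argmax(np.asarray(q), axis=0).reshape(h, w) # refined tamper mask
```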
The above technical scheme brings the following beneficial effects:
(1) experiments prove that compared with the traditional method and other deep learning-based methods, the method provided by the invention has higher positioning precision;
(2) although the network structure is complex, the network training difficulty is low, the model convergence speed is high in the network training stage, and the overall effect of the trained model is excellent.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is an illustration of a stitched tamper image library, comprising two sets (a), (b);
FIG. 3 is a diagram of an image stitching tamper location network architecture in accordance with the present invention;
fig. 4 is a network architecture diagram of the VGG-16.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings.
The prediction accuracy of splicing tampering positioning is improved by adopting deep learning, specifically a deep convolutional neural network, a supervised learning method well suited to processing visual information. In this embodiment a residual module is also designed, which reduces the training difficulty of the model and speeds up convergence; finally, the prediction is refined by post-processing based on a conditional random field.
An image stitching tamper positioning method based on a full convolution neural network, as shown in fig. 1, includes the following steps:
step 1: establishing a splicing and tampering image library comprising a training image and a test image;
step 2: initializing an image splicing tampering positioning network based on a full convolution neural network, and setting a training process of the network;
and step 3: initializing parameters in an image splicing tampering positioning network;
and 4, step 4: reading training image data, performing training operations of convolution, activation, pooling and deconvolution on the training image data according to the training process of the image splicing tampering positioning network, outputting a splicing positioning prediction result of the training image,
and 5: calculating an error value Loss between a training image splicing positioning prediction result and a real label, and adjusting parameters in an image splicing tampering network based on a full convolution neural network until the error value meets the precision requirement;
step 6: post-processing the network prediction result with the accuracy meeting the requirement by using a Conditional Random Field (CRF), adjusting parameters in an image splicing and tampering network, and outputting a final splicing positioning prediction result;
and 7: and reading the test image, adopting the trained image splicing tampering positioning network as the test network, and outputting a splicing tampering positioning prediction result of the test image.
In this embodiment, the step 1 is implemented by the following preferred scheme:
the training image and the test image both comprise two types of images, namely a spliced and tampered color image and a corresponding binaryzation real label image; the black part of the binarized real label image corresponds to the untampered area of the color image, and the white part corresponds to the tampered area of the color image, and fig. 2 shows 2 sets of samples of the spliced and tampered color image and the corresponding binarized real label image.
In this embodiment, the step 2 is implemented by the following preferred scheme:
The structure of the splicing tampering positioning network based on the full convolution neural network is shown in fig. 3: 1 convolution block, 4 structurally identical residual blocks and 1 deconvolution block are arranged in sequence, and adjacent blocks are connected by 1 pooling layer. The convolution block comprises 2 convolution layers, each immediately followed by 1 activation layer. Each residual block comprises 3 convolution layers; each of the first two convolution layers is immediately followed by 1 activation layer, and the output of the last convolution layer is added to the output of the first activation layer of the residual block and then immediately followed by 1 activation layer. The deconvolution block comprises 3 convolution layers; each of the first two convolution layers is immediately followed by 1 activation layer and 1 Dropout layer, and the last convolution layer is immediately followed by 1 deconvolution layer that performs a two-fold upsampling operation to give output 1. The output of the second residual block, after its pooling layer, passes through 1 convolution layer and 1 pooling layer, is added to output 1 and is then immediately followed by 1 deconvolution layer performing a two-fold upsampling operation to give output 2. The output of the third residual block, after its pooling layer, passes through 1 convolution layer and 1 pooling layer, is added to output 2 and is then immediately followed by 1 deconvolution layer performing an eight-fold upsampling operation, giving the output of the whole network.
The steps of the training process for setting up the network are as follows:
2-1, splicing and tampering the training image input image with a positioning network, and performing convolution activation on the input image by using a convolution kernel in a convolution block to obtain a feature map of the convolution block;
2-2, pooling the feature map of the convolution block, and obtaining a feature map of a first pooling layer through maximal pooling with a pooling window of 2 multiplied by 2;
2-3, carrying out convolution activation on the feature map of the first pooling layer by using the convolution kernel of the residual block 1 to obtain the feature map of the residual block 1;
2-4, pooling the characteristic diagram of the residual block 1, and obtaining the characteristic diagram of a second pooling layer through maximal pooling with a pooling window of 2 multiplied by 2;
2-5, performing convolution activation on the feature map of the second pooling layer by using the convolution kernel of the residual block 2 to obtain the feature map of the residual block 2;
2-6, pooling the feature map of the residual block 2, and obtaining a feature map of a third pooling layer through maximal pooling with a pooling window of 2 x 2;
2-7, carrying out convolution activation on the feature map of the third pooling layer by using the convolution kernel of the residual block 3 to obtain the feature map of the residual block 3;
2-8, pooling the feature map of the residual block 3, and obtaining a feature map of a fourth pooling layer through maximal pooling with a pooling window of 2 × 2;
2-9, carrying out convolution activation on the feature map of the fourth pooling layer by using the convolution kernel of the residual block 4 to obtain the feature map of the residual block 4;
2-10, pooling the feature map of the residual block 4, and obtaining a feature map of a fifth pooling layer through maximal pooling with a pooling window of 2 × 2;
2-11, performing convolution activation on the feature map of the fifth pooling layer by using a convolution kernel of the deconvolution block, and then performing twice deconvolution to obtain a feature map 1 of the deconvolution block, wherein the feature map 1 has the same size as the feature map of the fourth pooling layer;
2-12, performing convolution activation on the feature map of the fourth pooling layer, adding the feature map of the deconvolution block to the feature map 1 of the deconvolution block, and performing twice deconvolution to obtain a feature map 2 of the deconvolution block, wherein the feature map 2 has the same size as the feature map of the third pooling layer;
and 2-13, performing convolution activation on the feature map of the third pooling layer, adding the feature map of the third pooling layer to the feature map 2 of the deconvolution block, and performing eight-time deconvolution to obtain a feature map 3 of the deconvolution block, wherein the feature map has the same size as the input training image and corresponds to the splicing tampering positioning prediction result of each pixel point of the input training image.
In this embodiment, the step 3 is implemented by the following preferred scheme:
firstly, loading the trained parameters of the VGG-16 convolutional layer, and directly loading the corresponding parameters of the VGG-16 by the convolutional layer corresponding to the VGG-16 in the convolutional block and the four residual blocks, wherein a network structure of the VGG-16 is shown in FIG. 4. All parameters of a third convolution layer in the second residual block, all convolution layers of the deconvolution block, a convolution layer through which an output result of the second residual block passes through the pooling layer, and a convolution layer through which an output result of the third residual block passes through the pooling layer are initialized randomly according to normal distribution; setting a learning rate alpha; setting the weight to be adjusted once each batchsize training sample is used; an iteration period epoch is set.
In this embodiment, the step 4 is implemented by adopting the following preferred scheme:
the specific process of step 4 is as follows:
4-1, forward propagation stage: read arbitrary training image data x_k from the training set X and input it into the image splicing tampering positioning network; the real label corresponding to this training image in the training set is denoted O_k;
4-2, convolution activation process: sequentially convolving input training images and various characteristic graphs obtained by the training images through each block in the network with a trainable filter, and adding an offset:
x_j^l = f( Σ_{i∈M_j} x_i^(l-1) * k_ij^l + b_j^l )
wherein x_j^l denotes the j-th feature map output by the l-th convolution layer; M_j denotes the set of feature maps output by the (l-1)-th convolution layer; x_i^(l-1) denotes the i-th feature map output by the (l-1)-th convolution layer; k_ij^l denotes the filter corresponding to the j-th feature map of the l-th convolution layer; b_j^l denotes the bias corresponding to the j-th feature map of the l-th convolution layer; f(·) denotes the rectified linear unit activation function ReLU, with:
f(x) = x for x ≥ 0, and f(x) = a·x for x < 0
wherein a ~ U(0, u), 0 < u < 1, U(0, u) is the uniform distribution on (0, u), and a is a random number sampled from the uniform distribution U(0, u);
4-3, a pooling process: adopting a maximum pooling model, taking the maximum value in a pooling domain as a characteristic graph after sub-sampling pooling, namely:
S_pq = max_(2×2)(F_pq) + b
wherein F is the input feature map matrix; p and q respectively denote the row and column indices of the pooled map; the sub-sampling pooling domain is a 2 × 2 matrix; b is the bias; S_pq is the feature map after sub-sampling pooling; max_(2×2)(F_pq) denotes the maximum value taken from the corresponding 2 × 2 pooling domain of the input feature map matrix F;
4-4, a deconvolution process: deconvolution layers replace the fully connected layers, and their computation is consistent with the convolution activation process above;
4-5, the n input feature map matrices are multiplied in turn by the weight matrix of each layer to obtain the output Y_k corresponding to training image x_k:
Y_k = F_n(…(F_2(F_1(x_k·W^(1))·W^(2))…)·W^(n))
wherein F_i denotes the i-th input feature map matrix and W^(i) denotes the weight matrix corresponding to the i-th layer, i = 1, 2, …, n.
In this embodiment, the step 5 is implemented by the following preferred scheme:
the specific process of step 5 is as follows:
5-1, a back propagation stage: according to the error value between the output Y_k obtained after the training image passes through the image splicing tampering positioning network and the real label O_k, calculate the output error value E_k of the k-th training image, namely:
E_k = H(O_k, Y_k) + λ·R(w)
H(O_k, Y_k) = −Σ O_k·log(Y_k)   (the cross-entropy, summed over all pixels)
R(w) = ‖w‖₂²
wherein λ is a hyper-parameter weighting the proportion of the model-complexity loss in E_k; R(w) reflects the complexity of the model; w denotes all the weights in the image splicing tampering positioning network; the symbol ‖·‖₂² is the square of the l2 norm; P is the number of training images in the training set;
5-2, according to the output error value E_k, judge whether E_k remains stable within 0.01 ± 0.005 during one iteration period; if so, the prediction result meets the precision requirement and the training is finished; otherwise, return to step 3.
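The loop below sketches how the per-batch weight update and the stopping rule of steps 5-1 and 5-2 could be combined; `loader`, `optimizer` and `loss_fn` are stand-ins (the loss would be the cross-entropy plus L2 term shown earlier), and `max_epochs` is illustrative.

```python
def train_until_stable(net, loader, optimizer, loss_fn,
                       max_epochs=50, target=0.01, tolerance=0.005):
    """Update the weights once per batch (step 5-1) and stop once the mean epoch
    loss stays within 0.01 +/- 0.005 (step 5-2)."""
    for epoch in range(max_epochs):
        epoch_losses = []
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(net(images), labels)   # E_k for the current batch
            loss.backward()                       # back-propagate the error
            optimizer.step()                      # adjust the network parameters
            epoch_losses.append(loss.item())
        mean_loss = sum(epoch_losses) / len(epoch_losses)
        if abs(mean_loss - target) <= tolerance:  # precision requirement met
            break
    return net
```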
In this embodiment, the step 6 is implemented by the following preferred scheme:
post-processing the prediction result by the conditional random field is realized by the following formula:
E(x) = Σ_i ψ_u(x_i) + Σ_{i≠j} ψ_p(x_i, x_j)
ψ_p(x_i, x_j) = μ(x_i, x_j)·Σ_{m=1}^{K} w^(m)·k^(m)(f_i, f_j)
wherein E(x) is the conditional random field energy function; ψ_u(x_i) is the probability that each pixel x_i belongs to a certain class and is the data term of the CRF; ψ_p(x_i, x_j), determined by the gray-value difference and the spatial distance between two pixels, is the smoothness term of the CRF and can be expressed as the sum of several Gaussian functions; μ(x_i, x_j) is a compatibility parameter; w^(m) is the weight of the m-th class and K is the number of classes; k^(m)(f_i, f_j) is the Gaussian function corresponding to class m; f_i and f_j are the feature vectors corresponding to pixel point i and pixel point j, as follows:
k(f_i, f_j) = v^(1)·exp(−|p_i − p_j|²/(2θ_α²) − |I_i − I_j|²/(2θ_β²)) + v^(2)·exp(−|p_i − p_j|²/(2θ_γ²))
wherein p_i and p_j are respectively the position information of the two points and I_i and I_j are respectively the color information of the two points; θ_α, θ_β and θ_γ are correlation parameters between the two points; the term before the plus sign represents bilateral filtering, based on the idea that pixels that are close in space and close in gray value are likely to belong to the same object; the term after the plus sign represents spatial filtering, based on the idea of spatially smoothing the result to remove small isolated regions; v^(1) and v^(2) are weight parameters.
To verify the effect of the invention, the proposed network model is first trained on the public data set CASIA v2.0 and its positioning performance is tested directly; cross-validation is then carried out on two further public data sets, CASIA v1.0 and DVMM. The experimental results are shown in Table 1: compared with the traditional methods and other deep-learning-based methods, the method provided by the invention achieves higher positioning accuracy.
TABLE 1
Serial number Algorithm CASIA v2.0 CASIA v1.0 DVMM
1 ADQ 0.1752 0.2053 0.4975
2 NADQ 0.1423 0.1763 0.1682
3 DCT 0.2581 0.3005 0.5199
4 BLK 0.2316 0.2312 0.5234
5 CFA1 0.1852 0.2073 0.4667
6 CFA2 0.2045 0.2125 0.5031
7 NOI1 0.2414 0.2633 0.5740
8 NOI2 0.2075 0.2302 0.5318
9 FCN32 0.2602 0.3021 0.3003
10 FCN16 0.4023 0.3954 0.3865
11 FCN8 0.4912 0.4764 0.4661
12 FCN32+ residual 0.3055 0.2833 0.5701
13 FCN16+ residual 0.4259 0.4012 0.5803
14 FCN8+ residual 0.5035 0.4701 0.5757
15 FCN32+ residual + CRF 0.3398 0.3212 0.5943
16 FCN16+ residual + CRF 0.4868 0.4643 0.6004
17 The method of the present invention 0.5643 0.5530 0.6112
In addition, although the network structure is complex, the network is not difficult to train and the model converges easily. As network depth increases, errors normally accumulate and convergence becomes difficult. A neural network essentially maps a vector a of one space into another space through a non-linear transformation H(a), so training the network amounts to optimizing H(a); because H(a) is difficult to optimize directly, the easier-to-learn residual form F(a) = H(a) − a is introduced, which makes convergence faster when training deeper networks. In each residual block the input is connected directly to the output, similar to a "highway" shortcut: the output of each layer is not merely a mapping of its input but the superposition of that mapping and the input. This preserves the prior features of the current layer and strengthens the flow of information through the network, so the model converges faster during training and the trained model performs better overall.
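Schematically, the shortcut can be written as below; when the learned residual F is close to zero the block reduces to an identity mapping, which is what makes deeper stacks easier to optimize.

```python
import torch

def residual_block_output(a, residual_fn):
    """Instead of fitting H(a) directly, the block fits F(a) = H(a) - a and
    outputs F(a) + a through the shortcut connection."""
    return residual_fn(a) + a

x = torch.randn(1, 64, 32, 32)
y = residual_block_output(x, torch.zeros_like)  # F = 0 gives an identity mapping
assert torch.equal(x, y)
```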
The embodiments are only for illustrating the technical idea of the present invention, and the technical idea of the present invention is not limited thereto, and any modifications made on the basis of the technical scheme according to the technical idea of the present invention fall within the scope of the present invention.

Claims (6)

1. An image splicing tampering positioning method based on a full convolution neural network is characterized by comprising the following steps:
(1) establishing a splicing and tampering image library which comprises training images and test images;
(2) initializing an image splicing tampering positioning network based on a full convolution neural network, and setting a training process of the network;
the initialized image splicing tampering positioning network based on the full convolution neural network is as follows:
the image splicing tampering positioning network is provided, in sequence, with 1 convolution block, 4 structurally identical residual blocks and 1 deconvolution block; adjacent blocks are connected by 1 pooling layer; the convolution block comprises 2 convolution layers, each immediately followed by 1 activation layer; each residual block comprises 3 convolution layers, each of the first two convolution layers is immediately followed by 1 activation layer, and the output of the last convolution layer is added to the output of the first activation layer of the residual block and then immediately followed by 1 activation layer; the deconvolution block comprises 3 convolution layers, each of the first two convolution layers is immediately followed by 1 activation layer and 1 Dropout layer, and the last convolution layer is immediately followed by 1 deconvolution layer that performs a two-fold upsampling operation to give output 1; the output of the second residual block, after its pooling layer, passes through 1 convolution layer and 1 pooling layer, is added to output 1 and is then immediately followed by 1 deconvolution layer performing a two-fold upsampling operation to give output 2; the output of the third residual block, after its pooling layer, passes through 1 convolution layer and 1 pooling layer, is added to output 2 and is then immediately followed by 1 deconvolution layer performing an eight-fold upsampling operation, giving the output of the whole network;
(3) initializing parameters in an image splicing tampering positioning network;
(4) reading a training image, carrying out training operation on the training image according to a set network training process, and outputting a splicing positioning prediction result of the training image; the specific process of the step is as follows:
(4-1) forward propagation stage: reading arbitrary training image data x_k from the training set X and inputting it into the image splicing tampering positioning network, and marking the real label corresponding to the training image in the training set as O_k;
(4-2) convolution activation process: sequentially convolving input training images and various characteristic graphs obtained by the training images through each block in the network with a trainable filter, and adding an offset:
x_j^l = f( Σ_{i∈M_j} x_i^(l-1) * k_ij^l + b_j^l )
wherein x_j^l denotes the j-th feature map output by the l-th convolution layer; M_j denotes the set of feature maps output by the (l-1)-th convolution layer; x_i^(l-1) denotes the i-th feature map output by the (l-1)-th convolution layer; k_ij^l denotes the filter corresponding to the j-th feature map of the l-th convolution layer; b_j^l denotes the bias corresponding to the j-th feature map of the l-th convolution layer; f(·) denotes the rectified linear unit activation function ReLU, with:
f(x) = x for x ≥ 0, and f(x) = a·x for x < 0
wherein a ~ U(0, u), 0 < u < 1, U(0, u) is the uniform distribution on (0, u), and a is a random number sampled from the uniform distribution U(0, u);
(4-3) a pooling process: adopting a maximum pooling model, taking the maximum value in a pooling domain as a characteristic graph after sub-sampling pooling, namely:
S_pq = max_(2×2)(F_pq) + b
wherein F is the input feature map matrix; p and q respectively denote the row and column indices of the pooled map; the sub-sampling pooling domain is a 2 × 2 matrix; b is the bias; S_pq is the feature map after sub-sampling pooling; max_(2×2)(F_pq) denotes the maximum value taken from the corresponding 2 × 2 pooling domain of the input feature map matrix F;
(4-4) a deconvolution process: deconvolution layers replace the fully connected layers, and their computation is consistent with the convolution activation process above;
(4-5) the n input feature map matrices are multiplied in turn by the weight matrix of each layer to obtain the output Y_k corresponding to training image x_k:
Y_k = F_n(…(F_2(F_1(x_k·W^(1))·W^(2))…)·W^(n))
wherein F_i denotes the i-th input feature map matrix and W^(i) denotes the weight matrix corresponding to the i-th layer, i = 1, 2, …, n;
(5) calculating an error value between a training image splicing positioning prediction result and a real label, and adjusting parameters in an image splicing tampering network until the error value meets the precision requirement;
(6) post-processing the prediction result with the accuracy meeting the requirement by using a conditional random field, adjusting parameters in the image splicing and tampering network, and outputting a final prediction result of a training image;
(7) reading a test image, adopting a trained image splicing tampering positioning network to predict the test image, carrying out post-processing on a prediction result through a conditional random field, and outputting a final prediction result of the test image.
2. The image splicing tampering positioning method based on the full convolution neural network as claimed in claim 1, wherein in the step (1), the training image and the test image both comprise two types of images, namely a spliced and tampered color image and a corresponding binarized real label image; the black part of the binarized real label image corresponds to the untampered area of the color image, and the white part corresponds to the spliced and tampered area of the color image.
3. The image stitching tamper localization method based on the full convolution neural network as claimed in claim 1, wherein in the step (2), the step of setting the training process of the network is as follows:
(2-1) splicing and tampering the training image input image with a positioning network, and performing convolution activation on the input image by using a convolution kernel in a convolution block to obtain a feature map of the convolution block;
(2-2) pooling the feature map of the volume block, and obtaining the feature map of the first pooling layer through maximal pooling with a pooling window of 2 x 2;
(2-3) performing convolution activation on the feature map of the first pooling layer by using the convolution kernel of the first residual block to obtain a feature map of the first residual block;
(2-4) pooling the feature map of the first residual block, and obtaining a feature map of a second pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-5) performing convolution activation on the feature map of the second pooling layer by using the convolution kernel of the second residual block to obtain a feature map of the second residual block;
(2-6) pooling the feature map of the second residual block, and obtaining a feature map of a third pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-7) performing convolution activation on the feature map of the third pooling layer by using the convolution kernel of the third residual block to obtain a feature map of the third residual block;
(2-8) pooling the feature map of the third residual block, and obtaining a feature map of a fourth pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-9) performing convolution activation on the feature map of the fourth pooling layer by using the convolution kernel of the fourth residual block to obtain a feature map of the fourth residual block;
(2-10) pooling the feature map of the fourth residual block, and obtaining a feature map of a fifth pooling layer through maximal pooling with a pooling window of 2 × 2;
(2-11) performing convolution activation on the feature map of the fifth pooling layer by using a convolution kernel of the deconvolution block, and then performing twice deconvolution to obtain a feature map 1 of the deconvolution block, wherein the feature map 1 has the same size as the feature map of the fourth pooling layer;
(2-12) performing convolution activation on the feature map of the fourth pooling layer, adding the feature map of the deconvolution block to the feature map 1 of the deconvolution block, and performing twice deconvolution to obtain a feature map 2 of the deconvolution block, wherein the feature map 2 has the same size as the feature map of the third pooling layer;
(2-13) performing convolution activation on the feature map of the third pooling layer, adding the feature map of the deconvolution block to the feature map 2 of the deconvolution block, and performing eight-time deconvolution to obtain a feature map 3 of the deconvolution block, wherein the feature map has the same size as the input training image and corresponds to the splicing tampering positioning prediction result of each pixel point of the input training image.
4. The image stitching, tampering and positioning method based on the full convolution neural network as claimed in claim 1, wherein in step (3), the parameters in the image stitching, tampering and positioning network are initialized as follows:
firstly, loading trained parameters of the VGG-16 convolutional layer, and directly loading corresponding parameters of the VGG-16 on the convolutional layer corresponding to the VGG-16 in the convolutional block and the 4 residual blocks in the image splicing, tampering and positioning network; parameters of a third convolution layer in the second residual block, all convolution layers of the deconvolution block, a convolution layer through which the output of the second residual block passes after passing through the pooling layer, and a convolution layer through which the output of the third residual block passes after passing through the pooling layer are all initialized randomly according to normal distribution; setting a learning rate alpha; setting the weight to be adjusted once each batchsize training sample is used; an iteration period epoch is set.
5. The image stitching, tampering and positioning method based on the full convolution neural network as claimed in claim 1, wherein the specific process of step (5) is as follows:
(5-1) a back propagation stage: according to the error value between the output Y_k obtained after the training image passes through the image splicing tampering positioning network and the real label O_k, calculate the output error value E_k of the k-th training image, namely:
E_k = H(O_k, Y_k) + λ·R(w)
H(O_k, Y_k) = −Σ O_k·log(Y_k)   (the cross-entropy, summed over all pixels)
R(w) = ‖w‖₂²
wherein λ is a hyper-parameter weighting the proportion of the model-complexity loss in E_k; R(w) reflects the complexity of the model; w denotes all the weights in the image splicing tampering positioning network; the symbol ‖·‖₂² is the square of the l2 norm; P is the number of training images in the training set;
(5-2) according to the output error value E_k, judge whether E_k remains stable within 0.01 ± 0.005 during one iteration period; if so, the prediction result meets the precision requirement and the training is finished; otherwise, return to the step (3).
6. The image stitching tamper localization method based on the full convolution neural network as claimed in any one of claims 1 to 5, wherein in step (6), the post-processing of the prediction result by the conditional random field is realized by the following formula:
E(x) = Σ_c ψ_u(x_c) + Σ_{c≠d} ψ_p(x_c, x_d)
ψ_p(x_c, x_d) = μ(x_c, x_d)·Σ_{m=1}^{K} w^(m)·k^(m)(f_c, f_d)
wherein E(x) is the conditional random field energy function; ψ_u(x_c) is the probability that each pixel x_c belongs to a certain class and is the data term of the conditional random field; ψ_p(x_c, x_d), determined by the gray-value difference and the spatial distance between two pixels, is the smoothness term of the conditional random field and can be expressed as the sum of several Gaussian functions; μ(x_c, x_d) is a compatibility parameter; w^(m) is the weight of the m-th class and K is the number of classes; k^(m)(f_c, f_d) is the Gaussian function corresponding to class m; f_c and f_d are the feature vectors corresponding to pixel point c and pixel point d, as follows:
k(f_c, f_d) = v^(1)·exp(−|p_c − p_d|²/(2θ_α²) − |I_c − I_d|²/(2θ_β²)) + v^(2)·exp(−|p_c − p_d|²/(2θ_γ²))
wherein p_c and p_d are respectively the position information of the two points and I_c and I_d are respectively the color information of the two points; θ_α, θ_β and θ_γ are correlation parameters between the two points; the term before the plus sign represents bilateral filtering and the term after the plus sign represents spatial filtering; v^(1) and v^(2) are weight parameters.
CN201910593383.0A 2019-07-03 2019-07-03 Image splicing tampering positioning method based on full convolution neural network Active CN110414670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910593383.0A CN110414670B (en) 2019-07-03 2019-07-03 Image splicing tampering positioning method based on full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910593383.0A CN110414670B (en) 2019-07-03 2019-07-03 Image splicing tampering positioning method based on full convolution neural network

Publications (2)

Publication Number Publication Date
CN110414670A CN110414670A (en) 2019-11-05
CN110414670B true CN110414670B (en) 2021-09-28

Family

ID=68360111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910593383.0A Active CN110414670B (en) 2019-07-03 2019-07-03 Image splicing tampering positioning method based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN110414670B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852316B (en) * 2019-11-07 2023-04-18 中山大学 Image tampering detection and positioning method adopting convolution network with dense structure
CN111062931B (en) * 2019-12-20 2021-08-03 河北工业大学 Detection method of spliced and tampered image
CN111311563B (en) * 2020-02-10 2023-06-09 北京工业大学 Image tampering detection method based on multi-domain feature fusion
CN111445454B (en) * 2020-03-26 2023-05-05 江南大学 Image authenticity identification method and application thereof in license identification
CN111640116B (en) * 2020-05-29 2023-04-18 广西大学 Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN111881744B (en) * 2020-06-23 2024-06-21 安徽清新互联信息科技有限公司 Face feature point positioning method and system based on spatial position information
CN112597808A (en) * 2020-06-23 2021-04-02 支付宝实验室(新加坡)有限公司 Tamper detection method and system
CN111915568B (en) * 2020-07-08 2023-07-25 深圳大学 Image tampering positioning model generation method, image tampering positioning method and device
CN111915574B (en) * 2020-07-14 2024-03-22 深圳大学 Photoshop tampered image generation method and system
CN111985324B (en) * 2020-07-14 2022-10-28 广西大学 Road detection method combining full convolution regression neural network and conditional random field
CN112116565B (en) * 2020-09-03 2023-12-05 深圳大学 Method, apparatus and storage medium for generating countersamples for falsifying a flip image
CN112750122B (en) * 2021-01-21 2022-08-02 山东省人工智能研究院 Image tampering area positioning method based on double-current boundary perception neural network
CN113627073B (en) * 2021-07-01 2023-09-19 武汉大学 Underwater vehicle flow field result prediction method based on improved Unet++ network
CN113505509B (en) * 2021-07-08 2022-08-26 河北工业大学 High-precision motor magnetic field prediction method based on improved U-net
CN113673568B (en) * 2021-07-19 2023-08-22 华南理工大学 Method, system, computer device and storage medium for detecting tampered image
CN113781409B (en) * 2021-08-25 2023-10-20 五邑大学 Bolt loosening detection method, device and storage medium
CN113705788B (en) * 2021-08-27 2023-09-22 齐鲁工业大学 Infrared image temperature estimation method and system based on full convolution neural network
CN113920094A (en) * 2021-10-14 2022-01-11 厦门大学 Image tampering detection technology based on gradient residual U-shaped convolution neural network
CN114418840A (en) * 2021-12-15 2022-04-29 深圳先进技术研究院 Image splicing positioning detection method based on attention mechanism
CN117893975B (en) * 2024-03-18 2024-05-28 南京邮电大学 Multi-precision residual error quantization method in power monitoring and identification scene

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543684A (en) * 2018-10-09 2019-03-29 广州大学 Immediate targets tracking detection method and system based on full convolutional neural networks
CN109754393A (en) * 2018-12-19 2019-05-14 众安信息技术服务有限公司 A kind of tampered image identification method and device based on deep learning
KR101993266B1 (en) * 2018-12-19 2019-06-26 주식회사 로민 method for designing and learning varying model and method for detecting video forgeries therewith

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10423861B2 (en) * 2017-10-16 2019-09-24 Illumina, Inc. Deep learning-based techniques for training deep convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543684A (en) * 2018-10-09 2019-03-29 广州大学 Immediate targets tracking detection method and system based on full convolutional neural networks
CN109754393A (en) * 2018-12-19 2019-05-14 众安信息技术服务有限公司 A kind of tampered image identification method and device based on deep learning
KR101993266B1 (en) * 2018-12-19 2019-06-26 주식회사 로민 method for designing and learning varying model and method for detecting video forgeries therewith

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
An Improved Splicing Localization Method by Fully Convolutional Networks; BEIJING CHEN et al.; IEEE Access; 2018-11-12; Vol. 6; full text *
Locating splicing forgery by fully convolutional networks and conditional random field; Bo Liu et al.; Signal Processing: Image Communication; 2018; pp. 1-25 *
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs; Liang-Chieh Chen et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2018-04-30; Vol. 40, No. 4; p. 839 *
Image Splicing Localization Using A Multi-Task Fully Convolutional Network (MFCN); Ronald Salloum; arxiv.org; 2017-09-06; full text *
Locating splicing forgery by fully convolutional networks and conditional random field; Bo Liu et al.; Signal Processing: Image Communication; 2018-12-31; pp. 9-16 *
Recasting Residual-based Local Descriptors as Convolutional Neural Networks: an Application to Image Forgery Detection; Davide Cozzolino et al.; Deep Learning for Media Forensics; 2017-06-22; full text *
Research and Application of Blind Forensics Technology for Digital Image Splicing Tampering; 刘肖; China Masters' Theses Full-text Database, Information Science and Technology; 2017-03-15 (No. 3); p. I138-5777 *

Also Published As

Publication number Publication date
CN110414670A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110414670B (en) Image splicing tampering positioning method based on full convolution neural network
CN112862690B (en) Transformers-based low-resolution image super-resolution method and system
CN111611861B (en) Image change detection method based on multi-scale feature association
Yang et al. Deep feature importance awareness based no-reference image quality prediction
CN114219824A (en) Visible light-infrared target tracking method and system based on deep network
CN117197763A (en) Road crack detection method and system based on cross attention guide feature alignment network
Zhang et al. DuGAN: An effective framework for underwater image enhancement
CN115393698A (en) Digital image tampering detection method based on improved DPN network
CN114494699A (en) Image semantic segmentation method and system based on semantic propagation and foreground and background perception
Zhang et al. Feature compensation network based on non-uniform quantization of channels for digital image global manipulation forensics
CN111881914B (en) License plate character segmentation method and system based on self-learning threshold
Luo et al. A fast denoising fusion network using internal and external priors
CN109583584B (en) Method and system for enabling CNN with full connection layer to accept indefinite shape input
CN116452900A (en) Target detection method based on lightweight neural network
Wu et al. DHGAN: Generative adversarial network with dark channel prior for single‐image dehazing
Liu et al. Recurrent context-aware multi-stage network for single image deraining
Hu et al. Pyramid feature boosted network for single image dehazing
Tojo et al. Image denoising using multi scaling aided double decker convolutional neural network
Zhang et al. DAResNet Based on double-layer residual block for restoring industrial blurred images
Zhang et al. A lightweight CNN based information fusion for image denoising
CN112418120B (en) Crowd detection method based on peak confidence map
CN112287989B (en) Aerial image ground object classification method based on self-attention mechanism
Sharma et al. Image Fusion with Deep Leaning using Wavelet Transformation
Cao et al. A novel image multitasking enhancement model for underwater crack detection
Xu et al. Dense connection decoding network for crisp contour detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant