CN116883805A - Image tampering detection method based on convolutional neural network - Google Patents


Publication number
CN116883805A
CN116883805A (application CN202211624510.7A)
Authority
CN
China
Prior art keywords
image
tampered
layer
convolution
feature map
Prior art date
Legal status (assumed; Google has not performed a legal analysis)
Pending
Application number
CN202211624510.7A
Other languages
Chinese (zh)
Inventor
严彩萍
魏华建
李红
Current Assignee (listed assignee may be inaccurate)
Hangzhou Normal University
Original Assignee
Hangzhou Normal University
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hangzhou Normal University
Priority application: CN202211624510.7A
Publication: CN116883805A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/82: Arrangements using neural networks
    • G06V 10/40: Extraction of image or video features
    • G06V 10/42: Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V 10/52: Scale-space analysis, e.g. wavelet analysis
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention discloses an image tampering detection method based on a convolutional neural network. First, publicly available image tampering data sets are collected and organized, and the collected tampered-image data sets are augmented with a python script. The augmented data sets are then fed into a multi-scale integrated attention convolution network for model training; the network comprises a U-net framework, multi-scale feature map processing, and position and channel attention mechanisms. The trained model is used to test input images and obtain the final tampering localization result. The method is end-to-end: inputting a tampered image directly yields the tampered region, so no preprocessing or post-processing is needed, which is convenient and fast. By enlarging the difference between tampered and non-tampered regions through self-attention, the method achieves finer detection of tampered objects, with low training and testing cost.

Description

Image tampering detection method based on convolutional neural network
Technical Field
The invention belongs to the technical field of computers, in particular to computer vision and digital image processing, and specifically relates to an image tampering detection method based on a convolutional neural network.
Background
Images serve as carriers of information and are ever more widely used in daily life. As people rely on pictures as an important source of information, their authenticity becomes increasingly important; in fields such as public-opinion reporting and legal evidence collection, images are often treated as important and even conclusive evidence. However, with the rapid development of image-editing software, the authenticity of a picture can no longer be taken for granted: even non-professionals can easily tamper with an image. Because image tampering is cheap and easy to conceal, malicious tampering aimed at spreading false information is increasingly serious and poses great safety hazards to society and individuals.
With the development of deep learning, neural networks have in recent years achieved many excellent results in the image field, with outstanding performance in image classification, semantic segmentation and object detection. Image tampering detection is, to a certain extent, a two-class semantic segmentation problem, so whether deep learning algorithms can be used for tampering detection of digital images is a topic worth investigating. Tampering detection based on deep learning has made some progress, but several problems remain: (1) detected tampered regions are incomplete and their edges blurred; (2) small tampered targets are difficult to detect; (3) if post-processing is applied after tampering, detection easily fails.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an image tampering detection method based on a convolutional neural network.
The method specifically comprises the following steps:
Step (1): collect and organize publicly available image tampering data sets. Tampered-image data sets in the tampering detection field are obtained through open-source communities. Each data set comprises tampered images and tampered-label images (ground-truth images), placed in two separate folders; for data sets that include original images, an original-image folder is established to store the untampered originals. The tampered-image data sets include the CASIA, Columbia and NIST16 data sets.
Step (2): augment the collected tampered-image data sets with a python script. First, the tampered region is extracted from a tampered image; the region is then scaled, rotated and pasted onto an untampered original image.
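The copy-paste augmentation of step (2) can be sketched as follows. The patent uses a python script, so a minimal numpy version is shown; the function and parameter names are illustrative, and the top-left paste location and nearest-neighbour rescaling are simplifying assumptions, not the authors' script.

```python
import numpy as np

def augment_copy_paste(tampered, mask, original, angle_k=1, scale=0.5):
    """Sketch of the copy-paste augmentation: extract the tampered region
    via its ground-truth mask, scale and rotate it, then paste it onto an
    untampered original image (all names are illustrative)."""
    ys, xs = np.nonzero(mask)                      # pixels marked as tampered
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = tampered[y0:y1, x0:x1]                 # crop the tampered region
    # nearest-neighbour rescale of the patch
    h = max(1, int(patch.shape[0] * scale))
    w = max(1, int(patch.shape[1] * scale))
    ri = np.arange(h) * patch.shape[0] // h
    ci = np.arange(w) * patch.shape[1] // w
    patch = patch[ri][:, ci]
    patch = np.rot90(patch, k=angle_k)             # rotate by angle_k * 90 degrees
    # paste at the top-left corner of a copy of the original image
    out = original.copy()
    new_mask = np.zeros(original.shape[:2], dtype=np.uint8)
    ph, pw = patch.shape[:2]
    out[:ph, :pw] = patch
    new_mask[:ph, :pw] = 1                         # new ground-truth label
    return out, new_mask
```

Each augmented sample thus comes with a freshly generated ground-truth mask, which is what makes the pasted data usable for training.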
And (3) inputting the tampered image data set with the enhanced data into a multi-scale integrated attention convolution network to perform network model training.
The multi-scale integrated attention convolution network comprises a U-net network framework, multi-scale feature map processing, and position and channel attention mechanisms;
the U-net network framework performs 5 times of downsampling on an input 256×256 image to obtain 1024 8×8 feature maps; then, up-sampling is performed 5 more times, and an image of 256×256 original size is output. The specific operation is as follows: in the basic U-net framework, an input image is firstly subjected to coding processing, then is subjected to downsampling by 2 convolution kernels with 3 multiplied by 3, and is subjected to downsampling by one convolution kernel with the step length of 4 multiplied by 4 being 2, so that a characteristic diagram of 64 channels is obtained, and the next layer is provided with a characteristic diagram of 128 channels; in the next three layers, the input feature map is firstly segmented into a plurality of blocks, and the blocks are respectively sent to an attention mechanism. Then the operations of rolling and downsampling are performed. After undergoing a 5-layer operation of the encoder, a decoding operation is performed. The decoder of the network consists of four pairs of up-sampling layers and convolutional layers. The upsampling layer upsamples the feature map with bilinear interpolation. After the upsampling layer, the feature map doubles in width and height. The upsampling layer is followed by a convolution layer with a convolution kernel size of 1 x 1 and a stride of 1, and then a BN layer and a ReLU layer process the upsampled feature map. And jump connection is established between the same layers of the encoder and the decoder, the characteristic diagram of the encoder is cascaded behind the characteristic diagram of the decoder, the channel number is doubled, and the loss of information of the decoder is compensated.
The specific operation of the multi-scale feature map processing is as follows: the feature map is divided into image blocks using an odd-numbered sequence of grid sizes, a 5×5 grid at the third layer, 3×3 at the fourth layer and 1×1 (no division) at the fifth layer.
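A minimal sketch of this tile division; np.array_split is used so that feature maps whose side is not divisible by the grid are still handled (an implementation assumption, since the patent does not specify the behaviour), and names are illustrative.

```python
import numpy as np

def split_into_tiles(feature_map, grid):
    """Divide a 2-D feature map into grid x grid tiles, as in the
    multi-scale processing (grid = 5, 3, 1 for encoder layers 3-5).
    np.array_split tolerates sizes not divisible by the grid."""
    rows = np.array_split(feature_map, grid, axis=0)
    return [tile for row in rows for tile in np.array_split(row, grid, axis=1)]
```

With grid = 1 the function returns the feature map unchanged as a single tile, which matches the fifth layer's 1×1 case.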
The position and channel attention mechanisms are designed from the two perspectives of channel and position, and a self-attention algorithm enlarges the difference between tampered and non-tampered regions.
The specific training process of the multi-scale integrated attention convolution network is as follows:
(1) the data-enhanced tampered image dataset is input into a multi-scale integrated attention convolution network.
(2) The first two layers of the encoder are processed as follows:
for a convolutional neural network, an input image passes through a plurality of components consisting of a convolutional layer, a normalization layer and an activation layer, and 1024 feature maps are finally obtained.
The convolution layer convolves the image with k convolution kernels to generate k new feature maps. The j-th output feature map of the n-th layer, x_j^(n), is computed as x_j^(n) = f(Σ_i x_i^(n−1) * k_ij^(n) + b_j^(n)), where k_ij^(n) and b_j^(n) denote the convolution kernel and bias respectively, x_i^(n−1) is an input feature map and x_j^(n) is an output feature map.
The convolution layer is followed by a normalization layer. The batch normalization layer changes the data into a distribution with mean 0 and variance 1; the learnable parameters {α, β} convert each x_i in the data batch B = {x_1, …, x_i, …, x_n} into y_i, where x_i denotes the input of the i-th channel from the previous layer and y_i the i-th channel after the batch normalization operation: x̂_i = (x_i − μ_B)/√(σ_B² + ε), y_i = α·x̂_i + β, where μ_B and σ_B² denote the mean and variance of batch B respectively, and ε is a small positive number that prevents division by zero.
The normalization layer is followed by an activation layer, which transforms the input feature map through a nonlinear mapping using the ReLU activation function: f(z_i) = max(0, z_i), where z_i is the result of the convolution operation; f(z_i) = 0 if z_i < 0, and f(z_i) = z_i if z_i > 0.
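The batch normalization and ReLU steps above can be sketched in numpy as follows; this is a whole-batch version with scalar α and β for illustration, whereas real layers keep per-channel parameters.

```python
import numpy as np

def batch_norm(x, alpha=1.0, beta=0.0, eps=1e-5):
    """Batch normalization as described above: normalise the batch B to
    zero mean and unit variance, then rescale with the learnable pair
    {alpha, beta} (scalars here for illustration)."""
    mu = x.mean()                        # mu_B, the batch mean
    var = x.var()                        # sigma_B^2, the batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return alpha * x_hat + beta          # y_i

def relu(z):
    """ReLU activation: f(z) = max(0, z)."""
    return np.maximum(0.0, z)
```

After batch_norm the data has (approximately) zero mean and unit variance, and relu zeroes out every negative response while passing positive ones unchanged.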
(3) The last three layers of the encoder are processed as follows:
input F k,x Parallel processing is carried out through a position and channel attention mechanism, and fusion is carried out to obtain a complete result, wherein k=3, 4 and 5 represents the number of layers: first input F k,x Divided into 5 x 5 sub-areas f k,x The method comprises the steps of carrying out a first treatment on the surface of the For each sub-region, calculating the input position attention mechanism of each sub-region to obtain a sub-block f k,y Each sub-block f k,y And then the original size is combined again to obtain F k,y The method comprises the steps of carrying out a first treatment on the surface of the Then input channel attention mechanism, F k,x The input channel self-attention is not segmented any more, and the channel self-attention calculation is directly carried out to obtain an output F k,z The method comprises the steps of carrying out a first treatment on the surface of the Finally f k,y And F k,z And adding to obtain a 1024-layer feature vector.
(4) The decoder decodes the feature map carrying global features to obtain the final prediction result. The whole convolution network minimizes a cross-entropy loss function through a stochastic gradient descent optimization algorithm, thereby optimizing the prediction. The cross-entropy loss is L = −(1/(W·H)) Σ_{w=1..W} Σ_{h=1..H} [y_(w,h)·log p_(w,h) + (1 − y_(w,h))·log(1 − p_(w,h))], where p_(w,h) is the predicted probability that the pixel at coordinates (w, h) belongs to the spliced (tampered) region, y_(w,h) is the ground-truth mask label of that pixel, W is the image width and H the image height, with w = 1, …, W and h = 1, …, H.
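The per-pixel cross-entropy loss above can be sketched as follows; the clipping constant that guards log(0) is an implementation assumption.

```python
import numpy as np

def pixel_cross_entropy(p, y, eps=1e-7):
    """Binary cross-entropy over a W x H map of tamper probabilities p
    and a ground-truth mask y, averaged over all pixels, matching the
    loss formula above."""
    p = np.clip(p, eps, 1.0 - eps)    # avoid log(0)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```

For a pixel predicted at probability 0.5 the loss contribution is ln 2 ≈ 0.693 regardless of its label, which is the expected behaviour of an uninformative prediction.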
Step (4): test input images with the model trained in the previous step to obtain the final tampering localization result.
The beneficial effects of the invention include:
(1) The method is end-to-end: inputting a tampered image directly yields the tampered region, so no preprocessing or post-processing is needed, which is convenient and fast;
(2) The method's code is written in PyCharm, and its training and testing cost is low;
(3) The method enlarges the difference between tampered and non-tampered regions through self-attention, achieving finer detection of tampered objects;
(4) Thanks to the multi-scale design, the method performs better on small-scale tampering.
Drawings
FIG. 1 is a system flow diagram of the present invention;
FIG. 2 is a schematic illustration of a tamper image in a dataset;
FIG. 3 is a schematic view of a tamper tag image corresponding to FIG. 2;
fig. 4 is a schematic diagram of detection results for the tampered image of fig. 2.
Detailed Description
The technical scheme of the invention is further described through specific embodiments.
To verify the effect of the method, the following experiment was performed. Verification was carried out on a computer running Linux, with an Nvidia GeForce RTX 3060 graphics card with 12 GB of memory. The system flow diagram is shown in fig. 1.
Several published image tampering data sets were collected and organized, including the CASIA, NIST16 and Columbia data sets. Each data set comprises tampered images and tampered-label images, placed in two separate folders; for data sets with original images, an original-image folder was established to store the untampered originals. A tampered image is shown in fig. 2 and its tampered-label image in fig. 3, where the white portion indicates the tampered region and the black portion the non-tampered region.
The 3 data sets are subjected to image enhancement processing through a python script, firstly, a tampered area in the tampered image is extracted, and then the tampered area is scaled, rotated and pasted to an original image which is not tampered. The multi-scale attention convolutional neural network is trained by taking 80% of the respective data sets as training sets, and 20% of the data sets are used as test sets for testing the detection precision of the multi-scale attention convolutional neural network. And obtaining the performances of the model in different data sets through training and testing.
The program is run in PyCharm; the training-set pictures and their corresponding labels are input into the multi-scale attention convolutional neural network, and the final trained model is obtained after 150 training iterations.
The multi-scale integrated attention convolution network comprises a U-net network framework, multi-scale feature map processing, and position and channel attention mechanisms;
the U-net network framework performs 5 times of downsampling on an input 256×256 image to obtain 1024 8×8 feature maps; then, up-sampling is performed 5 more times, and an image of 256×256 original size is output. The specific operation is as follows: in the basic U-net framework, an input image is firstly subjected to coding processing, then is subjected to downsampling by 2 convolution kernels with 3 multiplied by 3, and is subjected to downsampling by one convolution kernel with the step length of 4 multiplied by 4 being 2, so that a characteristic diagram of 64 channels is obtained, and the next layer is provided with a characteristic diagram of 128 channels; in the next three layers, the input feature map is firstly segmented into a plurality of blocks, and the blocks are respectively sent to an attention mechanism. Then the operations of rolling and downsampling are performed. After undergoing a 5-layer operation of the encoder, a decoding operation is performed. The decoder of the network consists of four pairs of up-sampling layers and convolutional layers. The upsampling layer upsamples the feature map with bilinear interpolation. After the upsampling layer, the feature map doubles in width and height. The upsampling layer is followed by a convolution layer with a convolution kernel size of 1 x 1 and a stride of 1, and then a BN layer and a ReLU layer process the upsampled feature map. To compensate for the feature loss caused by upsampling, a jump connection is established between the same level of the encoder and the decoder, the feature map of the encoder is cascaded behind the feature map of the decoder, the number of channels is doubled, and the loss of decoder information is compensated.
In the multi-scale feature map processing, when the deep encoder layers handle high-level image semantics, the feature map is processed at multiple scales: splitting it highlights small tampered targets and improves the detection effect. The specific operation is as follows: the feature map is divided into image blocks using an odd-numbered sequence of grid sizes, 5×5 at the third layer, 3×3 at the fourth layer and 1×1 at the fifth layer. The small-scale image blocks highlight each part of the feature map, so that small tampered targets are detected.
The position and channel attention mechanisms enlarge the difference between tampered and non-tampered regions through self-attention computation, improving the detection effect. Specifically, they are designed from the two perspectives of channel and position, and the self-attention algorithm widens the gap between tampered and non-tampered regions so that the tampered region is located more accurately.
The specific training process is as follows:
(1) The data-enhanced tampered image dataset is input into a multi-scale integrated attention convolution network.
(2) The first two layers of the encoder are processed as follows:
for a convolutional neural network, an input image passes through a plurality of components consisting of a convolutional layer, a normalization layer and an activation layer, and 1024 feature maps are finally obtained.
The convolution layer convolves the image with k convolution kernels to generate k new feature maps. The j-th output feature map of the n-th layer, x_j^(n), is computed as x_j^(n) = f(Σ_i x_i^(n−1) * k_ij^(n) + b_j^(n)), where k_ij^(n) and b_j^(n) denote the convolution kernel and bias respectively, x_i^(n−1) is an input feature map and x_j^(n) is an output feature map;
a batch normalization layer is adopted to change the data into a distribution with mean 0 and variance 1; the learnable parameters {α, β} convert each x_i in the data batch B = {x_1, …, x_i, …, x_n} into y_i, where x_i denotes the input of the i-th channel from the previous layer and y_i the i-th channel after the batch normalization operation: x̂_i = (x_i − μ_B)/√(σ_B² + ε), y_i = α·x̂_i + β, where μ_B and σ_B² denote the mean and variance of batch B respectively, and ε is a small positive number that prevents division by zero;
the normalization layer is followed by an activation layer, which transforms the input feature map through a nonlinear mapping using the ReLU activation function: f(z_i) = max(0, z_i), where z_i is the result of the convolution operation; f(z_i) = 0 if z_i < 0, and f(z_i) = z_i if z_i > 0.
(3) The last three layers of the encoder are processed as follows:
The input F_{k,x} is processed in parallel by the position and channel attention mechanisms and the two results are fused, where k = 3, 4, 5 denotes the layer index. First, F_{k,x} is divided into 5×5 sub-regions f_{k,x}. Each sub-region is fed into the position attention mechanism to obtain a sub-block f_{k,y}, and the sub-blocks f_{k,y} are reassembled to the original size to obtain F_{k,y}. For the channel attention mechanism, F_{k,x} is not divided; channel self-attention is computed directly on it to obtain the output F_{k,z}. Finally, F_{k,y} and F_{k,z} are added to obtain a 1024-channel feature map.
(4) The decoder decodes the feature map carrying global features to obtain the final prediction result. The whole convolution network minimizes the cross-entropy loss function through a stochastic gradient descent optimization algorithm. The cross-entropy loss is L = −(1/(W·H)) Σ_{w=1..W} Σ_{h=1..H} [y_(w,h)·log p_(w,h) + (1 − y_(w,h))·log(1 − p_(w,h))], where p_(w,h) is the predicted probability that the pixel at coordinates (w, h) belongs to the spliced (tampered) region, y_(w,h) is the ground-truth mask label of that pixel, W is the image width and H the image height, with w = 1, …, W and h = 1, …, H.
The trained model is used to test the images in the test set, and detection accuracy is calculated from the corresponding tampered-region labels. Specifically, each tampered image is input into the trained model to obtain a detection result, which is compared with its tampered-region label; as shown in fig. 4, the predicted tampered region is marked white and the non-tampered region black.
The trained multi-scale integrated attention convolutional neural network can therefore effectively detect the tampered regions of tampered images: the IoU reaches 60.8% on the CASIA data set and 91% on the Columbia data set.
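The IoU figures quoted here can be computed from binary masks as follows; this is a generic sketch of the metric, not the authors' evaluation code.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection-over-Union between a predicted binary tamper mask
    and the ground-truth mask, the metric quoted for CASIA/Columbia."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0                       # both masks empty: perfect match
    return float(np.logical_and(pred, gt).sum() / union)
```

An IoU of 1.0 means the predicted and ground-truth tampered regions coincide exactly, while 0.5 means the overlap covers half of their union.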
The above examples should be understood as illustrative only and not limiting the scope of the invention. Any simple modification, equivalent variation and modification of the above embodiments according to the technical substance of the present invention without departing from the technical solution of the present invention still falls within the scope of the technical solution of the present invention.

Claims (5)

1. The image tampering detection method based on the convolutional neural network is characterized by comprising the following steps of:
step (1): collecting and organizing publicly available image tampering data sets; searching for tampered-image data sets in the tampering detection field through open-source communities, wherein each tampered-image data set comprises tampered images and tampered-label images, respectively placed in two folders;
step (2) carrying out data enhancement on the collected tampered image data set through a python script; firstly, extracting a tampered region in a tampered image, then scaling the tampered region, rotating and pasting the tampered region to an original image which is not tampered;
step (3), inputting the tampered image data set with the enhanced data into a multi-scale integrated attention convolution network for training a network model;
the multi-scale integrated attention convolution network comprises a U-net network framework, multi-scale feature map processing, and position and channel attention mechanisms;
the specific training process of the multi-scale integrated attention convolution network is as follows:
(1) inputting the tampered image data set after data enhancement into a multi-scale integrated attention convolution network;
(2) the first two layers of the encoder are processed as follows:
for a convolutional neural network, an input image passes through a plurality of assemblies formed by a convolutional layer, a normalization layer and an activation layer to finally obtain 1024 feature images;
the convolution layer convolves the image with k convolution kernels to generate k new feature maps; the j-th output feature map of the n-th layer, x_j^(n), is computed as x_j^(n) = f(Σ_i x_i^(n−1) * k_ij^(n) + b_j^(n)), where k_ij^(n) and b_j^(n) denote the convolution kernel and bias respectively, x_i^(n−1) is an input feature map and x_j^(n) is an output feature map;
a batch normalization layer is adopted to change the data into a distribution with mean 0 and variance 1; the learnable parameters {α, β} convert each x_i in the data batch B = {x_1, …, x_i, …, x_n} into y_i, where x_i denotes the input of the i-th channel from the previous layer and y_i the i-th channel after the batch normalization operation: x̂_i = (x_i − μ_B)/√(σ_B² + ε), y_i = α·x̂_i + β, where μ_B and σ_B² denote the mean and variance of batch B respectively, and ε is a small positive number that prevents division by zero;
the normalization layer is followed by an activation layer, which transforms the input feature map through a nonlinear mapping using the ReLU activation function: f(z_i) = max(0, z_i), where z_i is the result of the convolution operation; f(z_i) = 0 if z_i < 0, and f(z_i) = z_i if z_i > 0;
(3) The last three layers of the encoder are processed as follows:
the input F_{k,x} is processed in parallel by the position and channel attention mechanisms and the two results are fused, where k = 3, 4, 5 denotes the layer index: first, F_{k,x} is divided into 5×5 sub-regions f_{k,x}; each sub-region is fed into the position attention mechanism to obtain a sub-block f_{k,y}, and the sub-blocks f_{k,y} are reassembled to the original size to obtain F_{k,y}; for the channel attention mechanism, F_{k,x} is not divided, and channel self-attention is computed directly to obtain the output F_{k,z}; finally, F_{k,y} and F_{k,z} are added to obtain a 1024-channel feature map;
(4) decoding, by a decoder, the feature map carrying global features to obtain a final prediction result; the whole convolution network minimizes the cross-entropy loss function through a stochastic gradient descent optimization algorithm, the cross-entropy loss being L = −(1/(W·H)) Σ_{w=1..W} Σ_{h=1..H} [y_(w,h)·log p_(w,h) + (1 − y_(w,h))·log(1 − p_(w,h))], where p_(w,h) is the predicted probability that the pixel at coordinates (w, h) belongs to the spliced (tampered) region, y_(w,h) is the ground-truth mask label of that pixel, W is the image width and H the image height, with w = 1, …, W and h = 1, …, H;
and step (4): testing the input image with the trained model to obtain the final tampering localization result.
2. The method for detecting image tampering based on convolutional neural network as defined in claim 1, wherein: the tampered image data set includes a CASIA data set, a Columbia data set, and a NIST16 data set.
3. The method for detecting image tampering based on convolutional neural network as defined in claim 1, wherein: the U-net network framework downsamples an input 256×256 image 5 times to obtain 1024 feature maps of size 8×8, then upsamples 5 times and outputs an image at the original 256×256 size; the specific operation is as follows: in the basic U-net framework, the input image first goes through the encoder; in each of the first two layers it passes through two 3×3 convolution kernels and is then downsampled by one 4×4 convolution kernel with stride 2, yielding a 64-channel feature map at the first layer and a 128-channel feature map at the next layer; in the next three layers, the input feature map is first divided into several image blocks, which are fed into the attention mechanism separately, followed by convolution and downsampling; after the 5 encoder layers, decoding is performed; the decoder of the network consists of four pairs of upsampling and convolution layers; the upsampling layer upsamples the feature map with bilinear interpolation, doubling its width and height; it is followed by a convolution layer with a 1×1 kernel and stride 1, and then a BN layer and a ReLU layer process the upsampled feature map; skip connections are established between corresponding levels of the encoder and decoder, concatenating the encoder feature map after the decoder feature map.
4. The method for detecting image tampering based on convolutional neural network as defined in claim 1, wherein: the specific operation of the multi-scale feature map processing is as follows: the feature map is divided into image blocks using an odd-numbered sequence of grid sizes, 5×5 at the third layer, 3×3 at the fourth layer and 1×1 at the fifth layer.
5. The method for detecting image tampering based on convolutional neural network as defined in claim 1, wherein: the position and channel attention mechanisms are designed from the two perspectives of channel and position, and a self-attention algorithm enlarges the difference between tampered and non-tampered regions.
CN202211624510.7A 2022-12-16 2022-12-16 Image tampering detection method based on convolutional neural network Pending CN116883805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211624510.7A CN116883805A (en) 2022-12-16 2022-12-16 Image tampering detection method based on convolutional neural network


Publications (1)

Publication Number Publication Date
CN116883805A true CN116883805A (en) 2023-10-13

Family

ID=88259209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211624510.7A Pending CN116883805A (en) 2022-12-16 2022-12-16 Image tampering detection method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN116883805A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237787A (en) * 2023-11-14 2023-12-15 南京信息工程大学 Passive tampering detection method based on double-layer reinforced network
CN117237787B (en) * 2023-11-14 2024-02-06 南京信息工程大学 Passive tampering detection method based on double-layer reinforced network
CN117558011A (en) * 2024-01-08 2024-02-13 暨南大学 Image text tampering detection method based on self-consistency matrix and multi-scale loss
CN117558011B (en) * 2024-01-08 2024-04-26 暨南大学 Image text tampering detection method based on self-consistency matrix and multi-scale loss
CN117727104A (en) * 2024-02-18 2024-03-19 厦门瑞为信息技术有限公司 Near infrared living body detection device and method based on bilateral attention
CN117727104B (en) * 2024-02-18 2024-05-07 厦门瑞为信息技术有限公司 Near infrared living body detection device and method based on bilateral attention


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination