CN113989234A - Image tampering detection method based on multi-feature fusion - Google Patents

Image tampering detection method based on multi-feature fusion

Info

Publication number
CN113989234A
CN113989234A
Authority
CN
China
Prior art keywords
image
feature
features
noise
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111262826.1A
Other languages
Chinese (zh)
Inventor
曹娟
赵昕颖
谢添
李锦涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhongke Ruijian Technology Co ltd
Original Assignee
Hangzhou Zhongke Ruijian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zhongke Ruijian Technology Co ltd filed Critical Hangzhou Zhongke Ruijian Technology Co ltd
Priority to CN202111262826.1A priority Critical patent/CN113989234A/en
Publication of CN113989234A publication Critical patent/CN113989234A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image tampering detection method based on multi-feature fusion, suitable for the field of image tampering detection. The technical scheme adopted by the invention is as follows: an image tampering detection method based on multi-feature fusion, characterized by: acquiring an image to be detected; extracting low-level features and high-level features of the image to be detected through an RGB feature encoder; extracting noise features of the image to be detected through a noise feature encoder; and fusing the low-level features, high-level features, and noise features of the image to be detected with a decoder, and segmenting the tampered region of the image. By applying channel attention weighting to the feature maps during feature extraction, the method achieves adaptive feature selection, and by using attention modules over the channel and spatial dimensions it fuses the RGB features and noise features efficiently; it therefore has a stronger capability for extracting image tampering features and brings a marked improvement on image tampering detection tasks.

Description

Image tampering detection method based on multi-feature fusion
Technical Field
The invention relates to an image tampering detection method based on multi-feature fusion. The method is suitable for the field of image tampering detection.
Background
With the rapid development of digital media technology and deep learning algorithms, tampering with images using image editing software and vision algorithms has become increasingly common. Because maliciously tampered images can undermine the credibility of social media and threaten social stability, quickly and accurately detecting and locating the tampered regions in images has become an urgent problem.
Image tampering methods fall mainly into three categories: splicing, copy-paste, and removal. Splicing combines two or more images, pasting content copied from one image into a different image. Copy-paste copies a portion of an image and pastes it elsewhere within the same image; that is, splicing and copy-paste differ in whether the tampering operation occurs within a single picture or across multiple pictures. Removal deletes a portion of an image and fills in the deleted region with an image inpainting method.
Existing image tampering detection methods, at home and abroad, can be roughly divided into two types: traditional methods based on manual features and deep learning methods based on adaptive feature extraction. Natural images are generally acquired by digital devices such as cameras, so they carry features introduced by the intrinsic characteristics of the imaging device, such as noise patterns, color filter array (CFA) interpolation, and consistency of light intensity and color tone. A tampering operation can destroy the self-consistency of these features, for example by making the noise distribution inconsistent, or by introducing JPEG recompression characteristics when the image is recompressed after tampering.
Common manual features for image tampering detection include the image noise distribution, JPEG recompression characteristics, and CFA interpolation characteristics. Methods based on the noise distribution often use a steganalysis rich model (SRM) filter to extract the local noise distribution of the image, then determine whether a tampered region exists by analyzing that noise information. Methods based on JPEG recompression characteristics judge whether a tampered region exists by analyzing the discrete cosine transform (DCT) coefficients of the image: an image is generally re-saved after being tampered with, so a lossy JPEG image undergoes double JPEG compression. Compared with a single JPEG compression, the DCT coefficients of a recompressed image exhibit a periodic pattern, from which recompression can be detected. Error level analysis (ELA) is another commonly used tamper detection method; it analyzes the compression of lossy-format images by re-saving the image at a known compression ratio and examining the resulting error distribution.
Traditional algorithms extract manual features tailored to particular tampering characteristics and are usually effective only against one class of tampering technique. With the successful application of deep learning in computer vision, image tampering detection based on deep learning has also advanced markedly. Rao et al. initialize the first layer of the network with SRM filter weights, extracting noise information and feeding it into the network. Similarly, Bayar et al. design a constrained convolution layer that, through constraints on the kernel parameters, adaptively learns convolution kernels resembling high-pass filters and suppresses semantic learning, so that more tampering features are extracted. Salloum et al. propose a multi-task fully convolutional network (MFCN) that focuses on tampered-edge information to improve the model's detection of edge anomalies. The RGB-N model proposed by Zhou et al. uses the Faster R-CNN structure from object detection to build a dual-stream model that extracts RGB-channel features and noise distribution features separately to locate the tampered region. Yang et al. propose the Constrained R-CNN model, which uses a constrained convolution layer to extract tampering-trace information and feeds it into an R-CNN detection structure to identify the tampered region. The ManTraNet model proposed by Wu et al. extracts and fuses image features with ordinary convolution kernels, constrained convolution kernels, and SRM filters, then passes them through a feature extraction network and an anomaly detection network to capture local anomalies.
Because traditional methods extract manual features suited only to a specific tampering technique, they are severely limited when the tampering technique is unknown or when multiple techniques are combined. Existing deep learning methods try to suppress semantic features and reinforce tampering-trace features by introducing manual features such as noise features and tampered-region edge information, but their extraction and use of tampering-related features still leaves room for improvement. Deep learning methods use noise information mainly in two ways: one extracts the noise information and feeds it into a noise stream for feature extraction, forming a dual-stream network together with the stream that extracts RGB-channel features; the other places the noise-extraction module directly at the head of a single-stream network. First, these two ways of introducing noise information affect detection results differently; in the inventors' experiments, the dual-stream model outperformed the single-stream model. Second, when a dual-stream model is used, the feature fusion strategy also affects the model's performance. Therefore, how to efficiently extract and exploit tampering-related features is the bottleneck limiting further improvement of existing image tampering detection methods.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the above problems, an image tampering detection method based on multi-feature fusion is provided.
The technical scheme adopted by the invention is as follows: an image tampering detection method based on multi-feature fusion, characterized by:
acquiring an image to be detected;
extracting low-level features and high-level features of the image to be detected through an RGB feature encoder;
extracting noise features of the image to be detected through a noise feature encoder;
and fusing the low-level features, high-level features, and noise features of the image to be detected by using a decoder, and segmenting the tampered region of the image.
Extracting the low-level features and high-level features of the image to be detected through the RGB feature encoder comprises:
the RGB feature encoder adopts an SE-ResNeXt structure; the low-level features come from the intermediate result features after the second module of SE-ResNeXt; the high-level features are obtained by passing the image through the full SE-ResNeXt and an atrous spatial pyramid pooling structure, then upsampling to the same size as the low-level features.
Extracting the noise features of the image to be detected through the noise feature encoder comprises:
obtaining image noise information with the SRM filter and then inputting it into SE-ResNeXt for feature extraction.
Fusing the low-level features, high-level features, and noise features of the image to be detected by using the decoder comprises:
fusing the low-level features, high-level features, and noise features of the image to be detected by using an attention module over the channel and spatial dimensions.
An image tampering detection device based on multi-feature fusion, characterized by:
an image acquisition module, used for acquiring an image to be detected;
an RGB feature extraction module, used for extracting low-level features and high-level features of the image to be detected through an RGB feature encoder;
a noise feature extraction module, used for extracting noise features of the image to be detected through a noise feature encoder;
and a feature fusion module, used for fusing the low-level features, high-level features, and noise features of the image to be detected by using a decoder, and segmenting the tampered region of the image.
Extracting the low-level features and high-level features of the image to be detected through the RGB feature encoder comprises:
the RGB feature encoder adopts an SE-ResNeXt structure; the low-level features come from the intermediate result features after the second module of SE-ResNeXt; the high-level features are obtained by passing the image through the full SE-ResNeXt and an atrous spatial pyramid pooling structure, then upsampling to the same size as the low-level features.
Extracting the noise features of the image to be detected through the noise feature encoder comprises:
obtaining image noise information with the SRM filter and then inputting it into SE-ResNeXt for feature extraction.
Fusing the low-level features, high-level features, and noise features of the image to be detected by using the decoder comprises:
fusing the low-level features, high-level features, and noise features of the image to be detected by using an attention module over the channel and spatial dimensions.
A storage medium having stored thereon a computer program executable by a processor, characterized in that: the computer program, when executed, implements the steps of the above image tampering detection method based on multi-feature fusion.
An image tampering detection device based on multi-feature fusion, having a memory and a processor, the memory having stored thereon a computer program executable by the processor, characterized in that: the computer program, when executed, implements the steps of the above image tampering detection method based on multi-feature fusion.
The invention has the beneficial effects that: by applying channel attention weighting to the feature maps during feature extraction, the method achieves adaptive feature selection, and by using attention modules over the channel and spatial dimensions it fuses the RGB features and noise features efficiently; it therefore has a stronger capability for extracting image tampering features and brings a marked improvement on image tampering detection tasks.
Drawings
Fig. 1 is a block diagram of the embodiment.
FIG. 2 is a schematic diagram of the Squeeze-and-Excitation module added to the residual structure in the embodiment.
FIG. 3 is an example of the detection result of the embodiment.
Detailed Description
As shown in fig. 1, the present embodiment is an image tampering detection method based on multi-feature fusion, using a detection model obtained by improving the image segmentation model DeepLabv3+. The detection model is divided into three parts: an RGB feature encoder, a noise feature encoder, and a decoder; the RGB feature encoder and the noise feature encoder constitute the dual-stream structure of the model.
The image tampering detection method based on multi-feature fusion in this embodiment specifically includes the following steps:
s1, acquiring an image to be detected;
s2, extracting the low-layer features and the high-layer features of the image to be detected through an RGB feature encoder;
s3, extracting the noise characteristics of the image to be detected through a noise characteristic encoder;
and S4, fusing the low-layer feature, the high-layer feature and the noise feature of the image to be detected by using a decoder, and segmenting the tampered region of the image.
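Steps S1 to S4 can be sketched structurally as follows. This is a pure-NumPy toy: every function body is an illustrative placeholder standing in for the SE-ResNeXt encoders, SRM filtering, and decoder described below, not the patent's actual network.

```python
import numpy as np

def rgb_encoder(img):
    """Placeholder for the SE-ResNeXt + ASPP RGB stream: (low-level, high-level)."""
    low = np.stack([img.mean(axis=2)] * 4, axis=2)   # toy low-level map, (H, W, 4)
    high = np.tanh(low)                              # toy high-level map, (H, W, 4)
    return low, high

def noise_encoder(img):
    """Placeholder for SRM filtering + SE-ResNeXt: a first-difference residual."""
    gray = img.mean(axis=2)
    res = np.zeros_like(gray)
    res[:, 1:] = gray[:, 1:] - gray[:, :-1]          # crude local-noise proxy
    return res[..., None]                            # (H, W, 1)

def decoder(low, high, noise):
    """Placeholder for the fusion decoder: concatenate, score, threshold."""
    fused = np.concatenate([low, high, noise], axis=2)
    score = np.abs(fused).mean(axis=2)
    return (score > score.mean()).astype(np.uint8)   # binary tamper mask

img = np.random.default_rng(0).random((16, 16, 3))   # S1: image to be detected
low, high = rgb_encoder(img)                         # S2: RGB stream features
noise = noise_encoder(img)                           # S3: noise stream features
mask = decoder(low, high, noise)                     # S4: segment tampered region
```

The point of the sketch is only the data flow: two parallel encoder streams feeding one decoder that outputs a per-pixel mask.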
In this embodiment, the RGB feature encoder uses a ResNeXt structure augmented with a Squeeze-and-Excitation module, i.e., SE-ResNeXt, to realize adaptive feature selection; at the same time, it extracts multi-scale feature information in combination with an atrous spatial pyramid pooling (ASPP) structure.
The SE-ResNeXt structure adds a Squeeze-and-Excitation (SE) module so that the importance of each channel of the feature map is measured during feature extraction. Specifically, the SE module is divided into a Squeeze operation, which compresses along the spatial dimensions, and an Excitation operation, which learns a weight for each channel from the compressed features.
The Squeeze operation encodes the two-dimensional feature map of each channel into a single real value with a global receptive field, yielding a vector whose length equals the number of channels. The operation is implemented by global average pooling: for a feature map X of size H × W × C, global average pooling compresses it to a feature representation z of size 1 × 1 × C, where the value of channel c is calculated as:

z_c = F_sq(x_c) = (1 / (H × W)) · Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j)

After the feature z is obtained by the Squeeze operation, the Excitation operation applies a bottleneck structure of two fully connected layers to it, first reducing the dimension and then restoring it, while fusing the global feature information of every channel to learn each channel's importance. Defining r as the channel reduction ratio, the first fully connected layer reduces the input from 1 × 1 × C to 1 × 1 × (C/r); the second fully connected layer then restores the original dimension and learns the weight corresponding to each channel. Let σ denote the Sigmoid function, W1 ∈ R^{(C/r)×C} and W2 ∈ R^{C×(C/r)} the parameters of the two fully connected layers, and δ the ReLU activation function between them; then s is calculated as:

s = F_ex(z, W) = σ(W2 δ(W1 z))
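As a minimal NumPy sketch of the two operations above (the weight matrices W1 and W2 are random stand-ins with toy dimensions C = 4, r = 2, not trained parameters):

```python
import numpy as np

def se_block(X, W1, W2):
    """Squeeze-and-Excitation on a feature map X of shape (H, W, C)."""
    z = X.mean(axis=(0, 1))              # Squeeze: global average pooling -> (C,)
    h = np.maximum(W1 @ z, 0.0)          # first FC + ReLU (delta): C -> C/r
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))  # second FC + sigmoid (sigma): C/r -> C
    return X * s                         # reweight each channel by its weight s_c

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 4))
W1 = rng.standard_normal((2, 4))         # shape (C/r, C)
W2 = rng.standard_normal((4, 2))         # shape (C, C/r)
Y = se_block(X, W1, W2)
```

Because each s_c lies in (0, 1), the block can only attenuate channels, never amplify them; the network learns which channels to keep.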
Each residual module of ResNeXt adds an SE module in the manner shown in fig. 2; by modeling the importance of the feature map channels, feature importance is weighted and the tampering detection effect is improved.
In addition, the DeepLabv3+ structure uses an atrous spatial pyramid pooling (ASPP) structure, which extracts multi-scale feature information by applying atrous (dilated) convolutions with different dilation rates in parallel. For the image tampering detection problem, a larger receptive field helps capture global information and find the feature differences between image regions caused by tampering, and tampered regions themselves often occur at multiple scales. The use of multi-scale features therefore benefits image tamper detection.
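The parallel atrous branches can be sketched as follows. This NumPy toy applies one 3×3 averaging kernel at dilation rates 1, 2, and 4 and averages the branches; the specific rates, kernel, and mean fusion are illustrative assumptions, not the patent's exact ASPP configuration.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Same-size atrous convolution: 3x3 kernel taps spaced `rate` pixels apart."""
    H, W = x.shape
    xp = np.pad(x, rate)                  # zero-pad so the output keeps input size
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + 2 * rate + 1:rate, j:j + 2 * rate + 1:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(64, dtype=float).reshape(8, 8)
k = np.ones((3, 3)) / 9.0                 # toy averaging kernel
branches = [dilated_conv2d(x, k, r) for r in (1, 2, 4)]  # parallel ASPP branches
fused = np.stack(branches).mean(axis=0)   # toy fusion of the multi-scale maps
```

Increasing the rate widens the receptive field of each branch without adding parameters, which is exactly what makes ASPP cheap multi-scale context.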
Extracting the RGB information of the image yields its low-level and high-level features. The low-level features come from the intermediate result features after the second module of the encoder SE-ResNeXt; they have higher resolution and retain more image detail, which helps improve segmentation precision. The high-level features are obtained by passing the image through the complete SE-ResNeXt encoder and the atrous spatial pyramid pooling structure, then upsampling to the same size as the low-level features.
In this embodiment, the noise feature encoder first extracts tampering-related information, i.e., the local noise distribution of the image, using the SRM filter. The SRM filter was first applied in image steganalysis; it consists of 3 filter kernels, which form a convolution layer of size 5 × 5 × 3. The SRM filter models noise by taking, for each pixel, the residual between its actual value and the value estimated by interpolating its neighboring pixel values. After the image passes through the SRM filter, the image residual information is extracted as a local noise feature map.
The SRM filter uses fixed convolution kernel parameters and is thus a manual feature extraction method; the parameter matrices of its 3 filter kernels are shown in the patent figures.
after the image noise information is obtained by the SRM filter, the image noise information is input into another SE-ResNeXt encoder for feature extraction, and the number of channels is adjusted through a 1 multiplied by 1 convolution layer to obtain the noise feature.
In this embodiment, an attention module over the channel and spatial dimensions (the scSE attention module) is used to fuse the low-level features, high-level features, and noise features of the image to be detected.
The SE module described above compresses the feature map only along the spatial dimensions and measures importance along the channel dimension; in the scSE formulation this channel attention is denoted cSE (channel Squeeze-and-Excitation). Considering the importance of pixel-level spatial features for the image segmentation task, the input features can symmetrically be compressed along the channel dimension so that the importance of each spatial location is measured; this spatial attention is denoted sSE (spatial Squeeze-and-Excitation). Combining the spatial sSE with the channel cSE yields the scSE attention module.
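A compact NumPy sketch of the scSE idea follows. All weights are random stand-ins, and combining the two branches by element-wise maximum is one of the standard aggregation variants, assumed here for illustration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def scse(X, W1, W2, w_spatial):
    """scSE on X of shape (H, W, C): channel attention + spatial attention."""
    # cSE branch: squeeze the spatial dims, excite the channels
    z = X.mean(axis=(0, 1))                                      # (C,)
    s_ch = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))                 # per-channel weights
    cse = X * s_ch
    # sSE branch: squeeze the channel dim (a 1x1 conv), excite each location
    s_sp = sigmoid(np.tensordot(X, w_spatial, axes=([2], [0])))  # (H, W)
    sse = X * s_sp[..., None]
    return np.maximum(cse, sse)          # max aggregation (one standard variant)

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 6, 8))
W1 = rng.standard_normal((4, 8))         # (C/r, C) with r = 2
W2 = rng.standard_normal((8, 4))         # (C, C/r)
w_spatial = rng.standard_normal(8)       # 1x1-conv weights for the sSE branch
Y = scse(X, W1, W2, w_spatial)
```

cSE answers "which channels matter"; sSE answers "which pixels matter"; the combined module can therefore emphasize both informative channels and informative locations in the fused feature map.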
In this embodiment, after the low-level features, high-level features, and noise features are obtained, their channel counts are adjusted with separate 1 × 1 convolution layers. Since the low-level features and noise features mainly assist the high-level features in segmenting the tampered region, their channel counts are adjusted to 48, while the channel count of the high-level features is adjusted to 256. In the feature fusion stage, the low-level, high-level, and noise features are first fused by the original DeepLabv3+ feature concatenation (concatenate) method, and an scSE module is then applied as the attention module, so that features are weighted by importance and the more effective ones are adaptively selected.
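The channel adjustment and concatenation step can be sketched as follows. The input channel counts (256/2048/256) are assumed toy values; only the output counts of 48, 256, and 48 come from the description above, and the random 1×1-conv weights are stand-ins.

```python
import numpy as np

def conv1x1(X, W):
    """A 1x1 convolution is a per-pixel linear map over the channel axis."""
    return np.tensordot(X, W, axes=([2], [0]))

H = Wd = 32
rng = np.random.default_rng(1)
low = rng.standard_normal((H, Wd, 256))     # assumed low-level channel count
high = rng.standard_normal((H, Wd, 2048))   # assumed high-level channel count
noise = rng.standard_normal((H, Wd, 256))   # assumed noise-stream channel count

low48 = conv1x1(low, 0.01 * rng.standard_normal((256, 48)))       # -> 48 channels
noise48 = conv1x1(noise, 0.01 * rng.standard_normal((256, 48)))   # -> 48 channels
high256 = conv1x1(high, 0.01 * rng.standard_normal((2048, 256)))  # -> 256 channels

# DeepLabv3+-style concatenation; an scSE module would then act on `fused`
fused = np.concatenate([low48, high256, noise48], axis=-1)        # (H, W, 352)
```

Keeping the auxiliary streams at 48 channels against 256 for the high-level stream encodes the design choice that low-level detail and noise cues assist, rather than dominate, the segmentation.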
In this embodiment, a combination of three loss functions is used during training: the binary cross-entropy loss, the Dice loss, and the Lovász loss. Cross-entropy loss suits most semantic segmentation settings, and image tampering detection is equivalent to a binary classification of whether each pixel is tampered, so the binary cross-entropy loss is adopted. The Dice loss is widely used in medical image segmentation and suits scenes with extremely imbalanced positive and negative samples; in medical images the segmented area is very small, and the same problem exists in the image tampering detection task, so the Dice loss is used to overcome the drawbacks of sample imbalance. The Lovász loss directly optimizes the intersection-over-union between the predicted result and the ground truth.
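A sketch of the combined objective, covering the binary cross-entropy and Dice terms; the Lovász term and the relative weighting of the three losses are omitted as the patent does not specify them, so the weights here are illustrative assumptions.

```python
import numpy as np

def bce_loss(p, y, eps=1e-7):
    """Binary cross-entropy over per-pixel tamper probabilities."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

def dice_loss(p, y, eps=1e-7):
    """Dice loss: stable when the tampered (positive) area is very small."""
    inter = np.sum(p * y)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

def total_loss(p, y, w_bce=1.0, w_dice=1.0):
    # Lovasz term omitted for brevity; the weights are illustrative assumptions.
    return w_bce * bce_loss(p, y) + w_dice * dice_loss(p, y)

y = np.zeros((8, 8))
y[2:4, 2:4] = 1.0          # small tampered region: heavy class imbalance
loss = total_loss(y, y)    # a perfect prediction gives a near-zero loss
```

Note why the Dice term matters: with only 4 positive pixels out of 64, a predictor of all zeros already scores well under plain cross-entropy, while the Dice term penalizes it heavily.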
The embodiment also provides an image tampering detection device based on multi-feature fusion, which comprises an image acquisition module, an RGB feature extraction module, a noise feature extraction module and a feature fusion module.
The image acquisition module is used for acquiring an image to be detected; the RGB feature extraction module is used for extracting the low-layer features and the high-layer features of the image to be detected through an RGB feature encoder; the noise characteristic extraction module is used for extracting the noise characteristics of the image to be detected through a noise characteristic encoder; the characteristic fusion module is used for fusing the low-layer characteristic, the high-layer characteristic and the noise characteristic of the image to be detected by using the decoder and segmenting the tampered region of the image.
The present embodiment also provides a storage medium having stored thereon a computer program executable by a processor; the computer program, when executed, implements the steps of the image tampering detection method based on multi-feature fusion in this embodiment.
The embodiment also provides an image tampering detection device based on multi-feature fusion, having a memory and a processor, the memory storing a computer program executable by the processor; the computer program, when executed, implements the steps of the image tampering detection method based on multi-feature fusion in this embodiment.
Fig. 3 is an example of the detection results of the embodiment; as fig. 3 shows, the embodiment can accurately detect and locate tampered regions in an image.

Claims (10)

1. An image tampering detection method based on multi-feature fusion, characterized by:
acquiring an image to be detected;
extracting low-level features and high-level features of the image to be detected through an RGB feature encoder;
extracting noise features of the image to be detected through a noise feature encoder;
and fusing the low-level features, high-level features, and noise features of the image to be detected by using a decoder, and segmenting the tampered region of the image.
2. The image tampering detection method based on multi-feature fusion as claimed in claim 1, wherein said extracting the low-level features and high-level features of the image to be detected through the RGB feature encoder comprises:
the RGB feature encoder adopts an SE-ResNeXt structure; the low-level features come from the intermediate result features after the second module of SE-ResNeXt; the high-level features are obtained by passing the image through the full SE-ResNeXt and an atrous spatial pyramid pooling structure, then upsampling to the same size as the low-level features.
3. The image tampering detection method based on multi-feature fusion as claimed in claim 1, wherein said extracting the noise features of the image to be detected through the noise feature encoder comprises:
obtaining image noise information with the SRM filter and then inputting it into SE-ResNeXt for feature extraction.
4. The image tampering detection method based on multi-feature fusion as claimed in claim 1, wherein said fusing the low-level features, high-level features, and noise features of the image to be detected by using the decoder comprises:
fusing the low-level features, high-level features, and noise features of the image to be detected by using an attention module over the channel and spatial dimensions.
5. An image tampering detection device based on multi-feature fusion, characterized by:
an image acquisition module, used for acquiring an image to be detected;
an RGB feature extraction module, used for extracting low-level features and high-level features of the image to be detected through an RGB feature encoder;
a noise feature extraction module, used for extracting noise features of the image to be detected through a noise feature encoder;
and a feature fusion module, used for fusing the low-level features, high-level features, and noise features of the image to be detected by using a decoder, and segmenting the tampered region of the image.
6. The image tampering detection device based on multi-feature fusion as claimed in claim 5, wherein said extracting the low-level features and high-level features of the image to be detected through the RGB feature encoder comprises:
the RGB feature encoder adopts an SE-ResNeXt structure; the low-level features come from the intermediate result features after the second module of SE-ResNeXt; the high-level features are obtained by passing the image through the full SE-ResNeXt and an atrous spatial pyramid pooling structure, then upsampling to the same size as the low-level features.
7. The image tampering detection device based on multi-feature fusion as claimed in claim 5, wherein said extracting the noise features of the image to be detected through the noise feature encoder comprises:
obtaining image noise information with the SRM filter and then inputting it into SE-ResNeXt for feature extraction.
8. The image tampering detection device based on multi-feature fusion as claimed in claim 5, wherein said fusing the low-level features, high-level features, and noise features of the image to be detected by using the decoder comprises:
fusing the low-level features, high-level features, and noise features of the image to be detected by using an attention module over the channel and spatial dimensions.
9. A storage medium having stored thereon a computer program executable by a processor, characterized in that: the computer program, when executed, implements the steps of the image tampering detection method based on multi-feature fusion of any of claims 1-4.
10. An image tampering detection device based on multi-feature fusion, having a memory and a processor, the memory having stored thereon a computer program executable by the processor, characterized in that: the computer program, when executed, implements the steps of the image tampering detection method based on multi-feature fusion of any of claims 1-4.
CN202111262826.1A 2021-10-28 2021-10-28 Image tampering detection method based on multi-feature fusion Pending CN113989234A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111262826.1A CN113989234A (en) 2021-10-28 2021-10-28 Image tampering detection method based on multi-feature fusion

Publications (1)

Publication Number Publication Date
CN113989234A true CN113989234A (en) 2022-01-28

Family

ID=79743381


Country Status (1)

Country Link
CN (1) CN113989234A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503721A * 2023-06-29 2023-07-28 China Post Consumer Finance Co., Ltd. Method, device, equipment and storage medium for detecting tampering of identity card
CN116740015A * 2023-06-12 2023-09-12 Beijing Changmugu Medical Technology Co., Ltd. Medical image intelligent detection method and device based on deep learning and electronic equipment
CN117557562A * 2024-01-11 2024-02-13 Qilu University of Technology (Shandong Academy of Sciences) Image tampering detection method and system based on double-flow network
CN117558011A * 2024-01-08 2024-02-13 Jinan University Image text tampering detection method based on self-consistency matrix and multi-scale loss

Citations (5)

Publication number Priority date Publication date Assignee Title
CN110349136A * 2019-06-28 2019-10-18 Xiamen University Tampered image detection method based on deep learning
CN111340784A * 2020-02-25 2020-06-26 Anhui University Image tampering detection method based on Mask R-CNN
CN112150450A * 2020-09-29 2020-12-29 Wuhan University Image tampering detection method and device based on dual-channel U-Net model
CN112287940A * 2020-10-30 2021-01-29 Xi'an Polytechnic University Semantic segmentation method of attention mechanism based on deep learning
CN112329778A * 2020-10-23 2021-02-05 Xiangtan University Semantic segmentation method for introducing feature cross attention mechanism




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Cao Juan; Zhao Xinying; Xie Tian

Inventor before: Cao Juan; Zhao Xinying; Xie Tian; Li Jintao

RJ01 Rejection of invention patent application after publication

Application publication date: 20220128