CN112381775A - Image tampering detection method, terminal device and storage medium - Google Patents

Image tampering detection method, terminal device and storage medium

Info

Publication number
CN112381775A
CN112381775A (application CN202011230556.1A)
Authority
CN
China
Prior art keywords
image
tampered
layer
tampering detection
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011230556.1A
Other languages
Chinese (zh)
Other versions
CN112381775B (en
Inventor
林燕语
张光斌
高志鹏
尤俊生
赵建强
杜新胜
张辉极
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meiya Pico Information Co Ltd
Original Assignee
Xiamen Meiya Pico Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meiya Pico Information Co Ltd filed Critical Xiamen Meiya Pico Information Co Ltd
Priority to CN202011230556.1A priority Critical patent/CN112381775B/en
Publication of CN112381775A publication Critical patent/CN112381775A/en
Application granted granted Critical
Publication of CN112381775B publication Critical patent/CN112381775B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention relates to an image tampering detection method, a terminal device and a storage medium, wherein the method comprises the following steps. S1: acquire an untampered image and a corresponding tampered image, subtract the tampered image from the untampered image to obtain a difference image, and combine the untampered and tampered images into a training set. S2: construct a binary classification network model and train it on the training set, so that the trained model can distinguish whether an image has been tampered with; the binary classification network model comprises a feature extraction layer, an image attention layer and a classifier. S3: identify whether an image has been tampered with using the trained binary classification network model. The invention uses the difference image as the ground-truth label. With this label as tampering supervision information, the binary classification network is trained, guiding the network to produce an accurate tampering probability value and a forgery localization map, which effectively improves the recognition and classification of tampered face images.

Description

Image tampering detection method, terminal device and storage medium
Technical Field
The present invention relates to the field of image monitoring, and in particular, to an image tampering detection method, a terminal device, and a storage medium.
Background
With the progress of image editing technology and user-friendly editing software, low-cost tampering of images has become ubiquitous. These advances, however, also enable the generation of realistic fake content, which has a significant impact on human life: fake faces can be generated and spread across pervasive social networks, and can even be used to defeat face recognition systems, posing a serious security hazard. It is therefore crucial to develop methods that can reliably detect such manipulations, and this goal has attracted increasing interest in recent years.
Face tampering detection methods fall mainly into two categories: traditional methods and deep learning-based methods. Traditional methods design handcrafted features for a particular class of tampering and are effective only against that class, e.g. CFA-difference algorithms, pattern-noise estimation algorithms and JPEG quantization-table algorithms. Deep learning-based methods treat face forgery as an object detection or anomaly detection problem and use a deep learning model to automatically extract effective features for forgery detection, e.g. models based on constrained convolution, multi-scale analysis and GANs.
Although both traditional and deep learning-based tamper detection methods achieve a certain detection effect, they have shortcomings. Most traditional methods are designed for one specific tampering method, so their robustness and extensibility are poor and they are not suitable for detecting diverse tampering. Most deep learning-based methods perform only tamper detection, with weak or no localization capability, which limits their accuracy to a certain extent.
Disclosure of Invention
In order to solve the above problems, the present invention provides an image tampering detection method, a terminal device, and a storage medium.
The specific scheme is as follows:
an image tampering detection method, comprising the steps of:
s1: acquiring an untampered image and a corresponding tampered image, subtracting the tampered image from the untampered image to obtain a difference image of the tampered image, and combining the untampered image and the tampered image into a training set;
s2: constructing a binary classification network model and training it on the training set, so that the trained model can distinguish whether an image has been tampered with;
the binary classification network model comprises a feature extraction layer, an image attention layer and a classifier;
the feature extraction layer consists of a dual-stream network comprising an RGB stream convolution layer and a steganalysis rich model (SRM) stream convolution layer; the RGB stream convolution layer performs feature extraction on the input RGB image, while the SRM stream convolution layer first obtains a noise image through an SRM filter layer and then inputs the noise image into its convolution layer for feature extraction;
the image attention layer takes as input the concatenation of the features extracted by the RGB stream and SRM stream convolution layers, computes a difference image from the input feature map, and outputs the fusion of the difference image with the input feature map;
the classifier classifies the output of the image attention layer and judges whether the image is tampered;
s3: identifying whether an image is tampered through the trained binary classification network model.
Further, the difference image in step S1 is an image obtained by performing normalization processing on the difference value between the non-tampered image and the tampered image.
Further, the image forming the training set in step S1 further includes the following operations: and acquiring the face in the non-tampered image by a face detection algorithm, then intercepting the face in the tampered image by using the position of the face region corresponding to the acquired face, and forming the intercepted face into a training set.
Further, the RGB stream convolution layer and the SRM stream convolution layer each perform feature extraction through the first three stages of a residual network.
Further, the image attention layer operates on the input feature image by operating on the input through two convolution layers and a sigmoid function.
Further, the classifier uses the last two convolution stages of the residual network to extract features from its input, followed by a pooling layer and three fully connected layers for binary classification training.
Further, the loss function L of the model is:
L = L_cls + α*L_map
where L_cls denotes the binary cross-entropy loss function of the classifier, α denotes the loss weight, and L_map denotes the L1 loss function of the image attention layer.
An image tampering detection terminal device comprises a processor, a memory and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the method of the embodiment of the invention.
A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method as described above for an embodiment of the invention.
With this technical scheme, the shortcomings of face image tampering detection and localization techniques are remedied by using the difference image as the ground-truth label (GT). With the GT label as tampering supervision information, the binary classification network is trained, guiding it to produce an accurate tampering probability value and a forgery localization map, effectively improving the recognition and classification of tampered face images.
Drawings
Fig. 1 is a flowchart illustrating a first embodiment of the present invention.
Fig. 2 is a schematic diagram of a filter used in the SRM stream convolution layer in this embodiment.
Detailed Description
To further illustrate the various embodiments, the invention provides the accompanying drawings. The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the embodiments. Those skilled in the art will appreciate still other possible embodiments and advantages of the present invention with reference to these figures.
The invention will now be further described with reference to the accompanying drawings and detailed description.
The first embodiment is as follows:
an embodiment of the present invention provides an image tampering detection method, as shown in fig. 1, the method includes the following steps:
s1: acquiring an untampered image and a corresponding tampered image, subtracting the tampered image from the untampered image to obtain a difference image of the tampered image, and combining the untampered image and the tampered image together to form a training set.
In order to improve the generalization capability of the model and enable the model to learn discriminant features which can identify an untampered image and a tampered image better, in the embodiment, the tampered image uses multiple tampering modes, such as face attribute operation, copy-paste, face synthesis, identity transformation and the like, so that the training set comprises multiple different types of tampered images.
The difference image in this embodiment is generated specifically in the following manner:
and acquiring the face in the non-tampered image by using a face detection algorithm, and then intercepting the face in the tampered image by using the position of the face region corresponding to the acquired face. The purpose of this is to ensure consistency of the non-tampered image with the face position in the tampered image. And (3) subtracting the tampered image from the non-tampered image, obtaining a difference image according to the absolute value of the difference value, and normalizing the difference image to be in the range of [0,1 ].
In order to reduce the influence caused by image sampling, in this embodiment, a pixel value threshold value is set to 0.1, a pixel value greater than the pixel value threshold value is set to 1, and a pixel value less than or equal to the pixel value threshold value is set to 0.
The calculation formula of the difference image is as follows:
X_gt = ||X - X_tp|| / 255.0 (1)
where X denotes the untampered image, X_tp denotes the tampered image, and X_gt denotes the normalized difference image.
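The difference-label construction of step S1 (the normalization of Eq. (1) plus the 0.1 binarization threshold described above) can be sketched as follows; the function name is an illustrative choice, not from the patent:

```python
import numpy as np

def make_difference_label(x, x_tp, threshold=0.1):
    """Build the ground-truth difference image X_gt from an untampered
    image x and its tampered counterpart x_tp (uint8 arrays, same shape).

    Following Eq. (1): absolute difference, normalize by 255, then
    binarize at the 0.1 threshold to suppress sampling-level noise."""
    diff = np.abs(x.astype(np.float32) - x_tp.astype(np.float32)) / 255.0
    return (diff > threshold).astype(np.float32)

# Toy example: one strongly tampered pixel, one sampling-level difference.
x = np.zeros((4, 4), dtype=np.uint8)
x_tp = x.copy()
x_tp[1, 2] = 200   # tampered pixel, well above the threshold
x_tp[0, 0] = 10    # small difference, below the threshold
gt = make_difference_label(x, x_tp)
print(gt[1, 2], gt[0, 0])  # 1.0 0.0
```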
S2: and constructing a two-classification network model, and training the two-classification network model through a training set, so that the trained two-classification network model can distinguish whether the image is tampered.
The two-classification network model comprises a feature extraction layer, an image attention layer and a classifier.
(1) Feature extraction layer
In this embodiment, the feature extraction layer is a dual-stream network composed of two streams: an RGB stream convolution layer and a steganalysis rich model (SRM) stream convolution layer. The RGB stream extracts features from the input RGB image; the SRM stream first obtains a noise image through an SRM filter layer and then feeds the noise image into its convolution layer for feature extraction. The convolution layers of both streams use the first three stages of a residual network. The SRM stream can exploit the local noise distribution of the image to provide additional evidence, since the noise distribution of a tampered image is inconsistent with that of an untampered one.
The filters used by the SRM stream convolution layer are shown in fig. 2; they are commonly used and effective in noise analysis. The SRM convolution layer is constructed by converting the three filters into convolution kernels and freezing the weight of each kernel, i.e. the SRM convolution is not updated by back-propagation. The formula is as follows:
X_SRM = W_SRM * X (2)
where W_SRM denotes the parameters of the SRM convolution kernels, X denotes the input image, and X_SRM denotes the image generated by the SRM filtering.
The SRM stream convolution layer extracts the inconsistently distributed noise in a tampered image, allowing the network to capture the difference between untampered and tampered images and detect them more reliably.
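A minimal sketch of the frozen SRM filtering of Eq. (2). The patent's fig. 2 shows three filters, which are not reproduced in this text, so the single kernel below (a commonly cited 3x3 second-order SRM high-pass kernel) is an illustrative stand-in:

```python
import numpy as np

# A widely used SRM high-pass kernel (second-order derivative filter).
# Illustrative stand-in for the three kernels of fig. 2; the weights
# are frozen, mirroring the no-backprop constraint in the text.
SRM_KERNEL = np.array([[-1,  2, -1],
                       [ 2, -4,  2],
                       [-1,  2, -1]], dtype=np.float32) / 4.0

def srm_filter(x):
    """Valid-mode 2-D convolution of a grayscale image with the SRM
    kernel, i.e. X_SRM = W_SRM * X from Eq. (2)."""
    kh, kw = SRM_KERNEL.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * SRM_KERNEL)
    return out

flat = np.full((5, 5), 100.0)  # constant region carries no noise residual
print(srm_filter(flat))        # all zeros: the high-pass removes flat content
```

Because the kernel's coefficients sum to zero, smooth image content is suppressed and only local noise residuals survive, which is exactly why the SRM stream exposes tampering inconsistencies.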
The features of the RGB stream and SRM stream convolution layers of the dual-stream network are concatenated as the input to the image attention layer, using the following formula:
y_out = concat(y_fRGB, y_fSRM) (3)
where y_fRGB and y_fSRM denote the feature values obtained through the first three stages of the residual network; concat denotes the concatenation operation, joining y_fRGB and y_fSRM along the channel dimension to give a tensor of shape N x (k1 + k2) x H x W, where N denotes the batch size, k1 and k2 denote the channel counts of y_fRGB and y_fSRM respectively, and H and W denote the height and width of the current feature map.
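The channel-wise concatenation of Eq. (3) can be illustrated as follows; the channel counts and spatial size are illustrative placeholders, not values from the patent:

```python
import numpy as np

# Illustrative feature maps from the two streams, in (N, C, H, W) layout.
N, k1, k2, H, W = 2, 64, 64, 32, 32
y_rgb = np.random.rand(N, k1, H, W).astype(np.float32)
y_srm = np.random.rand(N, k2, H, W).astype(np.float32)

# Eq. (3): concatenate along the channel axis -> N x (k1 + k2) x H x W.
y_out = np.concatenate([y_rgb, y_srm], axis=1)
print(y_out.shape)  # (2, 128, 32, 32)
```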
(2) Image attention layer
The attention map is designed to highlight tampered image regions and to guide the network to detect those regions. An attention layer of this kind can be applied to any feature map of the classification model, focusing the network's attention on the discriminative regions.
In this embodiment, the input to the image attention layer is the feature map X_in obtained by convolving the concatenated features, with height H, width W and C channels. Two convolution layers and a sigmoid function are then applied to this input to compute the difference image:
M_gr = sigmoid(f2(f1(X_in))) (4)
where f1 and f2 denote the two convolution operations and M_gr denotes the network-generated difference image.
The intensity of each pixel in the attention map is close to 0 for real regions and close to 1 for tampered regions; the pixels of the attention map can thus be read as the probability that the corresponding image pixel belongs to a tampered region. Adding the difference image into the network features therefore helps the network distinguish tampered images. In this embodiment, the difference image and the input feature map are fused and output, with the fusion formula as follows:
F′ = X_in ⊙ M_gr (5)
where ⊙ denotes element-wise multiplication and F′ denotes the enhanced features.
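A minimal numpy sketch of the image attention layer and the fusion of Eq. (5). For brevity, the two convolution layers of Eq. (4) are reduced to 1x1 channel-mixing matrices; this simplification, and all shapes and names, are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_fuse(x_in, w1, w2):
    """Image attention sketch: two 1x1 'convolutions' (channel-mixing
    matrices), a sigmoid producing the predicted difference map M_gr
    (Eq. 4), then element-wise fusion F' = X_in ⊙ M_gr (Eq. 5)."""
    # x_in: (C, H, W); w1: (C1, C); w2: (1, C1)
    h = np.einsum('dc,chw->dhw', w1, x_in)           # first 1x1 conv
    m_gr = sigmoid(np.einsum('od,dhw->ohw', w2, h))  # second conv + sigmoid
    return x_in * m_gr, m_gr                         # broadcast over channels

rng = np.random.default_rng(0)
x_in = rng.standard_normal((8, 4, 4)).astype(np.float32)
w1 = rng.standard_normal((4, 8)).astype(np.float32)
w2 = rng.standard_normal((1, 4)).astype(np.float32)
f, m = attention_fuse(x_in, w1, w2)
print(f.shape, m.min() >= 0.0 and m.max() <= 1.0)  # (8, 4, 4) True
```

The sigmoid bounds every attention pixel to [0, 1], matching the reading of M_gr as a per-pixel tampering probability.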
(3) Classifier
Through the image attention layer, the binary classification network model obtains a feature mapping that highlights the tampered regions influencing the convolutional network's decision, guiding the network to learn discriminative features and improving its classification ability. In this embodiment, the last two convolution stages of the residual network extract features from the fused output of the image attention layer, after which a pooling layer and three fully connected layers perform binary classification training to judge whether the image is a tampered image.
(4) Loss function
To train the binary classification network, the loss function L of the model in this embodiment is:
L = L_cls + α*L_map (6)
where L_cls denotes the binary cross-entropy loss function used to train the classifier; α denotes the loss weight, used to balance the two loss terms; and L_map denotes the loss function used to train the difference image of the image attention layer. This loss minimizes the difference between the difference image predicted by the image attention layer and the difference image corresponding to the network input, used as the ground-truth label GT. If the image input to the network during training is untampered, the difference image is all zeros, i.e. an all-black image; if the input is a tampered image, the difference image is the one calculated in step S1. In this embodiment L_map adopts the L1 loss, with the specific formula:
L_map = ||M_gr - X′_gt||_1 (7)
where M_gr denotes the difference image produced by the network, and X′_gt denotes the true difference image, obtained by scaling X_gt to the same width and height as M_gr.
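The combined loss of Eq. (6) can be sketched as follows; the function name is illustrative, and the resizing of the true label to the map's spatial size is omitted (shapes are assumed to match):

```python
import numpy as np

def total_loss(p_pred, y_true, m_gr, x_gt, alpha=1.0):
    """Sketch of Eq. (6): L = L_cls + alpha * L_map, with L_cls a binary
    cross-entropy on the predicted tamper probability and L_map an L1
    loss (Eq. 7) between the predicted difference map m_gr and the true
    label x_gt. The resizing step of the patent is omitted here."""
    eps = 1e-7  # numerical safety for the logarithms
    l_cls = -(y_true * np.log(p_pred + eps)
              + (1 - y_true) * np.log(1 - p_pred + eps))
    l_map = np.mean(np.abs(m_gr - x_gt))
    return float(l_cls + alpha * l_map)

# Perfect prediction on a tampered sample: both loss terms vanish.
m = np.ones((4, 4), dtype=np.float32)
print(round(total_loss(1.0 - 1e-7, 1.0, m, m), 6))  # 0.0
```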
With this loss function, a binary classification network can be trained that predicts the ground-truth label GT, distinguishes tampered regions, and improves binary classification accuracy.
S3: and identifying whether the image is tampered through the trained two-classification network model.
The embodiment of the invention can judge various tampering operations, such as splicing, copy-paste and removal. By designing an attention layer, the tampered region is localized to obtain the ground-truth label GT. The GT is fused into the features as the input of the classifier, guiding the network to detect these regions and further improving the classifier's performance. This provides a brand-new idea for face image tampering detection.
Example two:
the invention further provides an image tampering detection terminal device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the method embodiment of the first embodiment of the invention.
Further, as an executable scheme, the image tampering detection terminal device may be a desktop computer, a notebook, a palm computer, a cloud server, and other computing devices. The image tampering detection terminal device can include, but is not limited to, a processor and a memory. It is understood by those skilled in the art that the above-mentioned constituent structure of the image tampering detection terminal device is only an example of the image tampering detection terminal device, and does not constitute a limitation to the image tampering detection terminal device, and may include more or less components than the above, or combine some components, or different components, for example, the image tampering detection terminal device may further include an input/output device, a network access device, a bus, and the like, which is not limited in this embodiment of the present invention.
Further, as an executable solution, the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, and the like. The general-purpose processor may be a microprocessor or the processor may be any conventional processor, and the processor is a control center of the image tampering detection terminal device, and various interfaces and lines are used to connect various parts of the entire image tampering detection terminal device.
The memory may be configured to store the computer program and/or modules, and the processor implements the various functions of the image tampering detection terminal device by executing the computer program and/or modules stored in the memory and calling data stored in the memory. The memory may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The invention also provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned method of an embodiment of the invention.
The module/unit integrated with the image tampering detection terminal device may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), software distribution medium, and the like.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. An image tampering detection method, characterized by comprising the steps of:
s1: acquiring an untampered image and a corresponding tampered image, subtracting the tampered image from the untampered image to obtain a difference image of the tampered image, and combining the untampered image and the tampered image together to form a training set;
s2: constructing a binary classification network model and training it on the training set, so that the trained model can distinguish whether an image has been tampered with;
the binary classification network model comprises a feature extraction layer, an image attention layer and a classifier;
the feature extraction layer consists of a dual-stream network comprising an RGB stream convolution layer and a steganalysis rich model (SRM) stream convolution layer; the RGB stream convolution layer performs feature extraction on the input RGB image, while the SRM stream convolution layer obtains a noise image through an SRM filter layer and inputs the noise image into its convolution layer for feature extraction;
the image attention layer takes as input the concatenation of the features extracted by the RGB stream and SRM stream convolution layers, computes a difference image from the input feature map, and outputs the fusion of the difference image with the input feature map;
the classifier classifies the output of the image attention layer and judges whether the image is tampered;
s3: identifying whether an image is tampered through the trained binary classification network model.
2. The image tampering detection method according to claim 1, characterized in that: in step S1, the difference image is an image obtained by normalizing the difference value between the non-tampered image and the tampered image.
3. The image tampering detection method according to claim 1, characterized in that: the composing of the images in the training set in step S1 further includes the following operations: and acquiring the face in the non-tampered image by a face detection algorithm, then intercepting the face in the tampered image by using the position of the face region corresponding to the acquired face, and forming the intercepted face into a training set.
4. The image tampering detection method according to claim 1, characterized in that: the RGB stream convolution layer and the SRM stream convolution layer each perform feature extraction through the first three stages of a residual network.
5. The image tampering detection method according to claim 1, characterized in that: the image attention layer operates on the input feature image by operating on the input through two convolution layers and a sigmoid function.
6. The image tampering detection method according to claim 1, characterized in that: the classifier uses the last two convolution stages of the residual network to extract features from its input, followed by a pooling layer and three fully connected layers for binary classification training.
7. The image tampering detection method according to claim 1, characterized in that: the loss function L of the model is:
L = L_cls + α*L_map
where L_cls denotes the binary cross-entropy loss function of the classifier, α denotes the loss weight, and L_map denotes the L1 loss function of the image attention layer.
8. An image tampering detection terminal device, characterized in that: comprising a processor, a memory and a computer program stored in said memory and running on said processor, said processor implementing the steps of the method according to any one of claims 1 to 7 when executing said computer program.
9. A computer-readable storage medium storing a computer program, characterized in that: the computer program when executed by a processor implementing the steps of the method as claimed in any one of claims 1 to 7.
CN202011230556.1A 2020-11-06 2020-11-06 Image tampering detection method, terminal device and storage medium Active CN112381775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011230556.1A CN112381775B (en) 2020-11-06 2020-11-06 Image tampering detection method, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011230556.1A CN112381775B (en) 2020-11-06 2020-11-06 Image tampering detection method, terminal device and storage medium

Publications (2)

Publication Number Publication Date
CN112381775A true CN112381775A (en) 2021-02-19
CN112381775B CN112381775B (en) 2023-02-21

Family

ID=74578960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011230556.1A Active CN112381775B (en) 2020-11-06 2020-11-06 Image tampering detection method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN112381775B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020150306A1 (en) * 2001-04-11 2002-10-17 Baron John M. Method and apparatus for the removal of flash artifacts
CN110378254A (en) * 2019-07-03 2019-10-25 中科软科技股份有限公司 Recognition methods, system, electronic equipment and the storage medium of vehicle damage amending image trace
CN110992238A (en) * 2019-12-06 2020-04-10 上海电力大学 Digital image tampering blind detection method based on dual-channel network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PENG ZHOU et al.: "Learning Rich Features for Image Manipulation Detection", IEEE Xplore *
YANG CHANGDONG et al.: "Fine-grained vehicle model recognition with AT-PGGAN-based data augmentation", Journal of Image and Graphics *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950564A (en) * 2021-02-23 2021-06-11 北京三快在线科技有限公司 Image detection method and device, storage medium and electronic equipment
CN112950564B (en) * 2021-02-23 2022-04-01 北京三快在线科技有限公司 Image detection method and device, storage medium and electronic equipment
CN112949469A (en) * 2021-02-26 2021-06-11 中国科学院自动化研究所 Image recognition method, system and equipment for face tampered image characteristic distribution
CN113129261A (en) * 2021-03-11 2021-07-16 重庆邮电大学 Image tampering detection method based on double-current convolutional neural network
CN113065592A (en) * 2021-03-31 2021-07-02 上海商汤智能科技有限公司 Image classification method and device, electronic equipment and storage medium
CN113673568B (en) * 2021-07-19 2023-08-22 华南理工大学 Method, system, computer device and storage medium for detecting tampered image
CN113673568A (en) * 2021-07-19 2021-11-19 华南理工大学 Method, system, computer device and storage medium for detecting tampered image
CN113705397A (en) * 2021-08-16 2021-11-26 南京信息工程大学 Face detection method based on dual-flow CNN structure fusion PRNU (vertical false positive) GAN (generic inverse) generation
CN113989245A (en) * 2021-10-28 2022-01-28 杭州中科睿鉴科技有限公司 Multi-view multi-scale image tampering detection method
CN113989245B (en) * 2021-10-28 2023-01-24 杭州中科睿鉴科技有限公司 Multi-view multi-scale image tampering detection method
CN115001937B (en) * 2022-04-11 2023-06-16 北京邮电大学 Smart city Internet of things-oriented fault prediction method and device
CN115001937A (en) * 2022-04-11 2022-09-02 北京邮电大学 Fault prediction method and device for smart city Internet of things
CN115396237A (en) * 2022-10-27 2022-11-25 浙江鹏信信息科技股份有限公司 Webpage malicious tampering identification method and system and readable storage medium
CN115578631A (en) * 2022-11-15 2023-01-06 山东省人工智能研究院 Image tampering detection method based on multi-scale interaction and cross-feature contrast learning
CN115578631B (en) * 2022-11-15 2023-08-18 山东省人工智能研究院 Image tampering detection method based on multi-scale interaction and cross-feature contrast learning

Also Published As

Publication number Publication date
CN112381775B (en) 2023-02-21

Similar Documents

Publication Publication Date Title
CN112381775B (en) Image tampering detection method, terminal device and storage medium
KR101896357B1 (en) Method, device and program for detecting an object
Wang et al. An effective method for plate number recognition
Nandi et al. Traffic sign detection based on color segmentation of obscure image candidates: a comprehensive study
Hechri et al. Two‐stage traffic sign detection and recognition based on SVM and convolutional neural networks
Sheikh et al. Traffic sign detection and classification using colour feature and neural network
WO2020164278A1 (en) Image processing method and device, electronic equipment and readable storage medium
Vanetti et al. Gas meter reading from real world images using a multi-net system
CN108875727B (en) The detection method and device of graph-text identification, storage medium, processor
Wang et al. S 3 d: scalable pedestrian detection via score scale surface discrimination
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
He et al. Aggregating local context for accurate scene text detection
Lin et al. Convolutional neural networks for face anti-spoofing and liveness detection
CN114373185A (en) Bill image classification method and device, electronic device and storage medium
Velliangira et al. A novel forgery detection in image frames of the videos using enhanced convolutional neural network in face images
Isaac et al. Image forgery detection using region–based Rotation Invariant Co-occurrences among adjacent LBPs
Liu et al. Presentation attack detection for face in mobile phones
Mannan et al. Optimized segmentation and multiscale emphasized feature extraction for traffic sign detection and recognition
CN111340139A (en) Method and device for judging complexity of image content
CN111163332A (en) Video pornography detection method, terminal and medium
Ghandour et al. Building shadow detection based on multi-thresholding segmentation
Lafuente-Arroyo et al. Traffic sign classification invariant to rotations using support vector machines
Hassan et al. Facial image detection based on the Viola-Jones algorithm for gender recognition
Berbar Skin colour correction and faces detection techniques based on HSL and R colour components
Rusia et al. A Color-Texture-Based Deep Neural Network Technique to Detect Face Spoofing Attacks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant