CN111160152A - Method for identifying whether face image is tampered - Google Patents

Method for identifying whether face image is tampered

Info

Publication number
CN111160152A
CN111160152A (application CN201911296878.3A)
Authority
CN
China
Prior art keywords
face
image
tampered
judgment
tampering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911296878.3A
Other languages
Chinese (zh)
Inventor
严国建
李志强
王彬
曾璐
杨阳
许璐
梁瑞凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD
Original Assignee
WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN DAQIAN INFORMATION TECHNOLOGY CO LTD
Priority to CN201911296878.3A
Publication of CN111160152A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Abstract

The invention relates to a method for identifying whether a face image has been tampered with, comprising the following steps: acquiring the coordinate information of all face frames in an image to be identified; processing each face frame to obtain a cropped image that contains part of the background surrounding the face; and feeding the cropped image into a pre-trained, deep-learning-based face tampering judgment model, which outputs whether the face in the cropped image has been tampered with. Applied to pictures or real-time video, the method helps combat unlawful acts that infringe on other people's rights, and thus helps protect facial privacy and portrait rights.

Description

Method for identifying whether face image is tampered
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for identifying whether a face image has been tampered with.
Background
With the rapid development of science and technology, AI (artificial intelligence) has become a hotspot of current research. Some software and mobile phone apps can already use AI to replace the face of a person in a picture or video with another person's face, producing a face-swapping effect for entertainment.
The face-swapping effect is convincing: expressions look natural and the result is highly realistic. Once lawbreakers use it to alter the faces in images or videos, it can create safety hazards, for example infringing on people's portrait and reputation rights, and can cause great harm to those involved. In the prior art, whether a face image has been tampered with is usually identified by combining conventional digital image processing techniques with manual inspection: the position coordinates of the face must be specified manually, and only tampering of specific kinds can be identified. When a face has been swapped with AI, the advantages of big data make the forgery convincing enough that traditional methods and the naked eye can hardly tell whether the face has been tampered with.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method for identifying whether a face image has been tampered with; through data processing and tuning of the face tampering judgment network, the method can identify tampering even in pictures where the naked eye cannot tell whether the face has been altered.
The technical scheme adopted to achieve this aim is a method for identifying whether a face image has been tampered with, comprising the following steps:
S1, acquiring the coordinate information of all face frames in the image to be identified;
S2, processing each face frame to obtain a cropped image containing part of the background surrounding the face;
S3, feeding the cropped image into a pre-trained, deep-learning-based face tampering judgment model, which outputs whether the face in the cropped image has been tampered with.
In the above technical solution, pre-training the face tampering judgment model includes:
collecting m original video files and the corresponding face-tampered video files as initial samples, extracting n sample pictures from the initial samples, and labeling whether the face in each sample picture has been tampered with;
taking the initial samples and the labeling results as input, and training the model over multiple rounds of parameter adjustment to obtain the face tampering judgment model.
Further, the trained face tampering judgment model is pruned and accelerated with AVX2 so that its processing speed meets the requirement of processing surveillance video in real time.
In the above technical solution, step S1 includes: detecting the image to be identified with a pre-trained face detection model to obtain the coordinate information of each face frame in the image.
In the above technical solution, step S2 includes: expanding each detected face frame outward, eliminating the mutual interference that arises in dense face scenes, and cropping the processed face frame to obtain a cropped image that is free of interference from other faces and contains part of the background surrounding the face.
With the pre-trained, deep-learning-based face tampering judgment model, the method can quickly and accurately identify images in which the naked eye cannot tell whether the face has been tampered with; thanks to the data processing and the optimization of the face tampering judgment network, the overall accuracy of the algorithm's face tampering identification reaches 95%.
Applied to pictures or real-time video, the method helps combat unlawful acts that infringe on other people's rights, and thus helps protect facial privacy and portrait rights.
Drawings
Fig. 1 is a flowchart of the method for identifying whether a face image has been tampered with according to the present invention.
Fig. 2-1 is an original image; Fig. 2-2 is a schematic diagram illustrating the effect of tampering identification by the method of the present invention.
Fig. 3-1 is another original image; Fig. 3-2 is a schematic diagram illustrating the effect of tampering identification by the method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments.
As shown in Fig. 1, the method for identifying whether a face image has been tampered with includes:
S1, acquiring the coordinate information of all face frames in the image to be identified.
A face detection result for the target image is obtained from the target image and a pre-trained face detection model. Any existing general-purpose face detection technique can serve as the face detection model, so it is not described in detail here; the model detects the position of each face frame in the image, i.e. the coordinate information of the face frame.
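The patent does not name a specific face detector, so the following minimal sketch of step S1 uses OpenCV's bundled Haar cascade purely as a stand-in; the function name detect_face_boxes and its parameters are illustrative, not part of the disclosure.

```python
# Step S1 (sketch): obtain the coordinates of all face frames in the image.
# Assumption: OpenCV's Haar cascade stands in for the unspecified pre-trained
# face detection model; any general-purpose detector could be substituted.
import cv2

def detect_face_boxes(image_bgr):
    """Return a list of (x, y, w, h) face frames found in the image."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) rectangle per detected face
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(map(int, b)) for b in boxes]
```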
S2, processing each face frame to obtain a cropped image containing part of the background surrounding the face. Each detected face frame is expanded outward, the mutual interference that arises in dense face scenes is eliminated, and the processed frame is cropped to obtain a crop that is free of interference from other faces and contains part of the background surrounding the face.
When a deep learning model extracts features from the face region and judges whether the face has been tampered with, it learns from information such as the texture, edges and abrupt chroma changes of the tampered region. If the face detection result frame is cropped directly, the tampered edge of the face is lost in most cases. For this reason, in the method of the present invention the result frame of the face detection algorithm is expanded so that the face crop includes part of the background region around the face. In dense scenes, however, expanding the detection frame can bring parts of neighboring faces into the crop, which may cause misjudgment of the face under examination. The information of faces that are not under examination is therefore removed from the face frame using the coordinate relationships between the expanded frames. Finally, the processed face frames are cropped to obtain crops that do not interfere with one another and contain part of the background surrounding each face. A sketch of this step follows.
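The expansion ratio and the exact way neighboring faces are suppressed are not specified in the patent; the sketch below assumes a fixed 30% outward expansion and grey-fills the regions of other detected faces that fall inside a crop. It consumes the (x, y, w, h) boxes produced by the step S1 sketch.

```python
# Step S2 (sketch): expand each face frame, suppress neighboring faces, and crop.
# The 30% ratio and the grey fill value are assumptions, not from the patent.

def expand_box(box, ratio, img_w, img_h):
    """Expand an (x, y, w, h) box outward by `ratio`, clipped to the image."""
    x, y, w, h = box
    dx, dy = int(w * ratio), int(h * ratio)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1, y1 = min(x + w + dx, img_w), min(y + h + dy, img_h)
    return x0, y0, x1, y1

def crop_faces_with_context(image_bgr, boxes, ratio=0.3):
    """Return one crop per face, with other detected faces neutralized."""
    img_h, img_w = image_bgr.shape[:2]
    expanded = [expand_box(b, ratio, img_w, img_h) for b in boxes]
    crops = []
    for i, (x0, y0, x1, y1) in enumerate(expanded):
        crop = image_bgr[y0:y1, x0:x1].copy()
        for j, (ox, oy, ow, oh) in enumerate(boxes):
            if j == i:
                continue
            # Intersection of the other face's frame with this crop, in crop coordinates
            ix0, iy0 = max(ox, x0) - x0, max(oy, y0) - y0
            ix1, iy1 = min(ox + ow, x1) - x0, min(oy + oh, y1) - y0
            if ix0 < ix1 and iy0 < iy1:
                crop[iy0:iy1, ix0:ix1] = 127  # grey out the neighboring face
        crops.append(crop)
    return crops
```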
S3, feeding the crops obtained in step S2, each containing part of the background surrounding a face and free of interference from other faces, into the pre-trained, deep-learning-based face tampering judgment model, which outputs whether the face in each crop has been tampered with.
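The architecture, input size and output format of the judgment model are not disclosed; the sketch below assumes a TorchScript binary classifier stored in a hypothetical file tamper_model.pt that takes a 224x224 RGB tensor and emits two logits (0 = genuine, 1 = tampered).

```python
# Step S3 (sketch): run each crop through the face tampering judgment model.
# Model file name, input size and label convention are assumptions.
import cv2
import torch

def judge_crops(crops, model_path="tamper_model.pt", device="cpu"):
    """Return True for each crop whose face is judged to be tampered."""
    model = torch.jit.load(model_path, map_location=device).eval()
    results = []
    with torch.no_grad():
        for crop in crops:
            rgb = cv2.cvtColor(cv2.resize(crop, (224, 224)), cv2.COLOR_BGR2RGB)
            tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255.0)
            logits = model(tensor.unsqueeze(0).to(device))
            results.append(bool(logits.argmax(dim=1).item() == 1))
    return results
```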
In this embodiment, pre-training the face tampering judgment model includes:
collecting 10,000 original video files and the corresponding face-tampered video files as initial samples, extracting 66,000 sample pictures from the initial sample videos, and labeling whether the face in each sample picture has been tampered with;
taking the initial samples and the labeling results as input, and training the model over multiple rounds of parameter adjustment to obtain the face tampering judgment model (see the training sketch below).
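The network architecture, training parameters and data layout are not disclosed; the sketch below assumes the labeled frames have already been cropped as in step S2 and sorted into hypothetical train/genuine and train/tampered folders, and uses a ResNet-18 backbone as a stand-in for the deep learning network.

```python
# Pre-training (sketch): fine-tune a binary classifier on labeled face crops.
# Folder names, backbone, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

def train_tamper_model(data_dir="train", epochs=10, lr=1e-4, device="cpu"):
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # ImageFolder assigns label 0 to "genuine" and 1 to "tampered" (alphabetical order)
    dataset = datasets.ImageFolder(data_dir, transform=tfm)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: genuine vs. tampered
    model = model.to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```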
As a preferred embodiment, the trained face tampering judgment model can be pruned and accelerated with AVX2 so that its processing speed meets the requirement of processing surveillance video in real time. Pruning and acceleration serve only to make the overall algorithm fast enough to process video in real time; they can be realized with conventional techniques and are not described in detail here.
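The patent treats pruning and AVX2 acceleration as conventional operations and gives no details. As one illustrative deployment path (an assumption, not the patented procedure), the trained model can be exported to ONNX and served with ONNX Runtime, whose CPU kernels use AVX2 where the processor supports it; pruning itself is not shown.

```python
# Deployment (sketch): export the trained model to ONNX and load it with ONNX Runtime.
# The file name is hypothetical; AVX2 use is handled by the runtime, not set explicitly.
import torch
import onnxruntime as ort

def export_and_load(model, onnx_path="tamper_model.onnx"):
    model.eval()
    dummy = torch.randn(1, 3, 224, 224)  # matches the assumed 224x224 RGB input
    torch.onnx.export(model, dummy, onnx_path,
                      input_names=["input"], output_names=["logits"])
    opts = ort.SessionOptions()
    opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    return ort.InferenceSession(onnx_path, sess_options=opts,
                                providers=["CPUExecutionProvider"])
```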
To assess the accuracy of the method, tampered pictures were identified with it; the results are shown in Figs. 2 and 3. Fig. 2-1 is an original picture and Fig. 2-2 is the picture obtained after tampering with the face in Fig. 2-1; after Fig. 2-2 is analyzed with the method of the present invention, the tampered face in Fig. 2-2 is found and marked as suspected tampering. Fig. 3-1 is another original picture and Fig. 3-2 is the picture obtained after tampering with the face in Fig. 3-1; after Fig. 3-2 is analyzed with the method of the present invention, the tampered face in Fig. 3-2 is found and marked as suspected tampering.

Claims (5)

1. A method for identifying whether a face image has been tampered with, characterized by comprising the following steps:
S1, acquiring the coordinate information of all face frames in the image to be identified;
S2, processing each face frame to obtain a cropped image containing part of the background surrounding the face;
S3, feeding the cropped image into a pre-trained, deep-learning-based face tampering judgment model, which outputs whether the face in the cropped image has been tampered with.
2. The method for identifying whether a face image has been tampered with according to claim 1, characterized in that in step S3 the pre-training of the face tampering judgment model includes:
collecting m original video files and the corresponding face-tampered video files as initial samples, extracting n sample pictures from the initial samples, and labeling whether the face in each sample picture has been tampered with;
taking the initial samples and the labeling results as input, and training the model over multiple rounds of parameter adjustment to obtain the face tampering judgment model.
3. The method for identifying whether a face image has been tampered with according to claim 2, characterized in that the trained face tampering judgment model is pruned and accelerated with AVX2 so that its processing speed meets the requirement of processing surveillance video in real time.
4. The method for identifying whether a face image has been tampered with according to any one of claims 1 to 3, characterized in that step S1 includes:
detecting the image to be identified with a pre-trained face detection model to obtain the coordinate information of each face frame in the image.
5. The method for identifying whether a face image has been tampered with according to claim 4, characterized in that step S2 includes:
expanding each detected face frame outward, eliminating the mutual interference that arises in dense face scenes, and cropping the processed face frame to obtain a cropped image that is free of interference from other faces and contains part of the background surrounding the face.
CN201911296878.3A (filed 2019-12-16, priority 2019-12-16): Method for identifying whether face image is tampered. Status: Pending. Published as CN111160152A (en).

Priority Applications (1)

CN201911296878.3A (priority date 2019-12-16, filing date 2019-12-16): Method for identifying whether face image is tampered; published as CN111160152A (en)

Publications (1)

CN111160152A (en): published 2020-05-15

Family

ID=70557182

Family Applications (1)

CN201911296878.3A (priority date 2019-12-16, filing date 2019-12-16): Method for identifying whether face image is tampered; status: Pending; published as CN111160152A (en)

Country Status (1)

CN (1): CN111160152A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
CN109754393A * (priority 2018-12-19, published 2019-05-14), 众安信息技术服务有限公司: Tampered image identification method and device based on deep learning
CN110414437A * (priority 2019-07-30, published 2019-11-05), 上海交通大学: Face tampering detection and analysis method and system based on convolutional neural network model fusion

Similar Documents

Publication Publication Date Title
CN109858371B (en) Face recognition method and device
Korshunov et al. Deepfakes: a new threat to face recognition? assessment and detection
KR102195922B1 (en) Internet-based facial beautification system
EP4123503A1 (en) Image authenticity detection method and apparatus, computer device and storage medium
CN107077589B (en) Facial spoofing detection in image-based biometrics
CN103914839B (en) Image stitching and tampering detection method and device based on steganalysis
CN111385283B (en) Double-recording video synthesis method and double-recording system of self-service equipment
CN111091098B (en) Training method of detection model, detection method and related device
US10061996B1 (en) Face recognition method and system for personal identification and authentication
CN111126366B (en) Method, device, equipment and storage medium for distinguishing living human face
CN110414437A (en) Face datection analysis method and system are distorted based on convolutional neural networks Model Fusion
KR20220042301A (en) Image detection method and related devices, devices, storage media, computer programs
CN109977993A (en) A kind of fire alarm method, apparatus and computer readable storage medium
JP2001127990A (en) Information communication system
CN113573044B (en) Video data processing method and device, computer equipment and readable storage medium
CN108804893A (en) A kind of control method, device and server based on recognition of face
CN106055632B (en) Video authentication method based on scene frame fingerprint
CN108563997A (en) It is a kind of establish Face datection model, recognition of face method and apparatus
CN111160152A (en) Method for identifying whether face image is tampered
CN114881838B (en) Bidirectional face data protection method, system and equipment for deep forgery
CN106682669A (en) Image processing method and mobile terminal
JP2009156948A (en) Display control device, display control method, and display control program
CN111126373A (en) Internet short video violation judgment device and method based on cross-modal identification technology
TWI777689B (en) Method of object identification and temperature measurement
CN113468954B (en) Face counterfeiting detection method based on local area features under multiple channels

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination