CN111754396A - Face image processing method and device, computer equipment and storage medium - Google Patents

Face image processing method and device, computer equipment and storage medium

Info

Publication number
CN111754396A
Authority
CN
China
Prior art keywords
face image
face
target
image
mask
Prior art date
Legal status
Granted
Application number
CN202010730209.9A
Other languages
Chinese (zh)
Other versions
CN111754396B (en)
Inventor
张勇
罗宇辰
严骏驰
刘威
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010730209.9A
Publication of CN111754396A
Priority to PCT/CN2021/100912 (WO2022022154A1)
Priority to US17/989,169 (US20230085605A1)
Application granted
Publication of CN111754396B
Status: Active

Classifications

    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/772 Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
    • G06V10/776 Validation; Performance evaluation
    • G06V10/803 Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V20/95 Pattern authentication; Markers therefor; Forgery detection
    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06V40/171 Human faces: Local features and components; Facial parts; Occluding parts, e.g. glasses
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T5/70 Denoising; Smoothing
    • G06T7/90 Determination of colour characteristics
    • G06T9/002 Image coding using neural networks
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30201 Face


Abstract

The application relates to a face image processing method and apparatus, a computer device, and a storage medium. The method comprises: acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face; processing the first face image to generate a first updated face image having non-real face image characteristics; adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image; acquiring a target face mask of the first face image, the target face mask being generated by randomly deforming the face region of the first face image; and fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image. By adopting the method, diverse target face images can be generated.

Description

Face image processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing a facial image, a computer device, and a storage medium.
Background
With the development of artificial intelligence, face-swapping technology has emerged: a face in a face image is replaced with another face to obtain a false face image. More and more application scenarios need false face images, for example recognizing false face images in face recognition scenarios, or generating entertaining videos from false face images. However, current approaches simply replace the face in a face image with another face directly, so the generated false face images have low diversity.
Disclosure of Invention
In view of the above technical problems, it is necessary to provide a face image processing method, apparatus, computer device, and storage medium capable of generating diverse target face images.
A method of facial image processing, the method comprising:
acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
processing the first face image to generate a first updated face image having non-real face image characteristics;
adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image;
acquiring a target face mask of the first face image, wherein the target face mask is generated by randomly deforming a face area of the first face image;
and fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
In one embodiment, the processing of the first face image to generate a first updated face image having non-real face image characteristics comprises:
and generating a Gaussian noise value, and adding the Gaussian noise value to the pixel values of the first face image to obtain a first updated face image with non-real face image characteristics.
In one embodiment, after the obtained current face detection model is used as the face detection model, the method further includes:
the method comprises the steps of obtaining a face image to be detected, inputting the face image to be detected into a face detection model for detection, obtaining a detection result, and generating alarm information when the detection result is a non-real face image.
A facial image processing apparatus, the apparatus comprising:
an image acquisition module for acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
the image processing module is used for processing the first face image to generate a first updated face image with non-real face image characteristics;
the color adjusting module is used for adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image;
the mask acquisition module is used for acquiring a target face mask of the first face image, and the target face mask is generated by randomly deforming the face area of the first face image;
and the image fusion module is used for fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
In one embodiment, the image processing module comprises:
the Gaussian blur unit is used for calculating the weight of the pixel points in the first face image by using a Gaussian function to obtain a pixel point blur weight matrix; and calculating to obtain the fuzzy pixel value of the pixel point according to the original pixel value of the pixel point in the first face image and the fuzzy weight matrix of the pixel point, and generating a first updated face image.
In one embodiment, the image processing module comprises:
the image compression unit is used for acquiring a compression rate, and compressing the first face image by using the compression rate to obtain a compressed first face image; and taking the compressed first face image as a first updated face image with the characteristics of the non-real face image.
In one embodiment, the image processing module comprises:
and the noise addition unit is used for generating a Gaussian noise value, and adding the Gaussian noise value to the pixel values of the first face image to obtain a first updated face image with non-real face image characteristics.
In one embodiment, the mask acquisition module includes:
the key point extracting unit is used for extracting face key points in the first face image and determining a face area of the first face image according to the face key points;
and the calling unit is used for randomly adjusting the positions of the key points of the face in the face area of the first face image to obtain a deformed face area, and generating a target face mask according to the deformed face area.
In one embodiment, the facial image processing apparatus further includes:
the occlusion detection module is used for performing face occlusion detection on the second face image to obtain a face occlusion region;
the mask adjusting module is used for adjusting the target face mask according to the face occlusion region to obtain an adjusted face mask;
the image fusion module is further used for fusing the first adjusted face image and the second face image according to the adjusted face mask to obtain a target face image.
In one embodiment, the mask adjustment module is further configured to calculate the difference between the mask value of a pixel in the target face mask and the occlusion value of the corresponding pixel in the face occlusion region, use the difference as the mask adjustment value, and adjust the face mask according to the mask adjustment value.
In one embodiment, the color adjustment module is further configured to obtain a target color adjustment algorithm identifier, and invoke a target color adjustment algorithm according to the target color adjustment algorithm identifier, where the target color adjustment algorithm includes at least one of a color migration algorithm and a color matching algorithm; and adjusting the color distribution of the first updated face image to be consistent with the color distribution of the second face image based on a target color adjustment algorithm to obtain a first adjusted face image.
In one embodiment, the image fusion module comprises:
the calling unit is used for acquiring a target image fusion algorithm identifier and calling a target image fusion algorithm according to the target image fusion algorithm identifier; the target image fusion algorithm comprises at least one of an alpha blending algorithm, a Poisson fusion algorithm and a neural network algorithm;
and the fusion unit is used for fusing the first adjusted face image and the second face image based on the target face mask by using a target image fusion algorithm to obtain a target face image.
In one embodiment, the fusion unit is further configured to determine a first adjusted face region from the first adjusted face image according to the target face mask; and fusing the first adjusted face area to the position of the face area in the second face image to obtain the target face image.
In one embodiment, the fusion unit is further configured to determine a region of interest from the first adjusted face image according to the face mask, calculate a first gradient field of the region of interest and a second gradient field of the second face image; determining a fusion gradient field according to the first gradient field and the second gradient field, and calculating a fusion divergence field by using the fusion gradient field; and determining a second fusion pixel value based on the fusion divergence field, and obtaining the target face image according to the second fusion pixel value.
In one embodiment, the facial image processing apparatus further includes:
a data acquisition module for acquiring a real face image dataset and a target face image dataset, each target face image in the target face image dataset being generated using a different first and second real face image in the real face image dataset, the target face image dataset being taken as the current face image dataset.
The model training module is used for taking each real face image in the real face image data set as positive sample data, taking each current face image in the current face image data set as negative sample data, and training by using a deep neural network algorithm to obtain a current face detection model;
the model testing module is used for acquiring testing face image data, testing the current face detection model by using the testing face image data to obtain the corresponding accuracy of the current face detection model, and enabling the testing face image data and the real face image data set to be different data sets;
an update data acquisition module for acquiring an update target face image dataset when the accuracy is less than a preset accuracy threshold, the update target face image dataset comprising each target face image and each update target face image in the target face image dataset, each update target face image being regenerated using a different first real face image and second real face image in the real face image dataset;
and the iteration loop module is used for taking the updated target face image dataset as the current face image dataset and returning to the step of taking each real face image in the real face image dataset as positive sample data, taking each current face image in the current face image dataset as negative sample data, and training with a deep neural network algorithm to obtain the current face detection model, executing this loop until the accuracy exceeds the preset accuracy threshold, and then taking the obtained current face detection model as the face detection model.
In one embodiment, the facial image processing apparatus further includes:
the image detection module is used for acquiring a face image to be detected, inputting the face image to be detected into the face detection model for detection to obtain a detection result, and generating alarm information when the detection result is a non-real face image.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
processing the first face image to generate a first updated face image having non-real face image characteristics;
adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image;
acquiring a target face mask of the first face image, wherein the target face mask is generated by randomly deforming a face area of the first face image;
and fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
processing the first face image to generate a first updated face image having non-real face image characteristics;
adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image;
acquiring a target face mask of the first face image, wherein the target face mask is generated by randomly deforming a face area of the first face image;
and fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
According to the face image processing method and apparatus, the computer device, and the storage medium, the first face image is processed to generate a first updated face image with non-real face image characteristics, and the color distribution of the first updated face image is then adjusted according to the color distribution of the second face image to obtain a first adjusted face image. A target face mask of the first face image is acquired, the target face mask being generated by randomly deforming the face region of the first face image, and the first adjusted face image and the second face image are fused according to the target face mask to obtain a target face image. A target face image constructed in this way can accurately imitate the effect of a false face image: it contains non-real face image characteristics, the color distribution of a non-real face image, the face-region shape of a non-real face image, and the like. Moreover, when a large number of target face images are generated in this way, each target face mask is obtained by randomly deforming the face region of the first face image, so the generated target face images have rich diversity.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of a facial image processing method;
FIG. 2 is a flow diagram illustrating a method for facial image processing according to one embodiment;
FIG. 3 is a schematic diagram of generating a first updated face image in one embodiment;
FIG. 4 is a schematic flow chart illustrating the generation of a target face mask in one embodiment;
FIG. 5 is a diagram illustrating an example of adjusting a face mask;
FIG. 6 is a schematic diagram of a process for obtaining a first adjusted facial image according to one embodiment;
FIG. 7 is a schematic flow chart illustrating obtaining a target face image according to one embodiment;
FIG. 8 is a schematic diagram of a process for obtaining an image of a target face according to another embodiment;
FIG. 9 is a schematic diagram of a process for obtaining an image of a target face according to yet another embodiment;
FIG. 10 is a schematic flow chart of obtaining a face detection model in one embodiment;
FIG. 11 is a schematic flow chart illustrating the process of obtaining a target facial image in one embodiment;
FIG. 12 is a schematic diagram of a randomly selected image processing method in the embodiment of FIG. 11;
FIG. 13 is a schematic diagram of randomly selected color adjustment algorithm names in the embodiment of FIG. 11;
FIG. 14 is a schematic illustration of mask generation and deformation in the embodiment of FIG. 11;
FIG. 15 is a schematic diagram illustrating names of randomly selected image fusion algorithms in the embodiment of FIG. 11;
FIG. 16 is a block diagram of a method for facial image processing in accordance with one embodiment;
FIG. 17 is a partially schematic illustration of a target face image generated in the embodiment of FIG. 16;
FIG. 18 is a schematic diagram of an application environment of the facial image processing method in the embodiment of FIG. 16;
FIG. 19 is a block diagram showing the construction of a face image processing apparatus according to an embodiment;
FIG. 20 is a diagram illustrating an internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Computer Vision (CV) is the science of how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and further processing the images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques that attempt to build artificial intelligence systems capable of obtaining information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multidisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It specializes in how computers simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
The solutions provided in the embodiments of the present application relate to artificial intelligence technologies such as image detection and deep learning, and are specifically described in the following embodiments:
The face image processing method provided by the present application can be applied in the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The server 104 acquires, from the terminal 102, a first face image and a second face image, both images containing a real face. The server 104 processes the first face image to generate a first updated face image having non-real face image characteristics; adjusts the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image; and acquires a target face mask of the first face image, the target face mask being generated by randomly deforming the face region of the first face image. The server 104 then fuses the first adjusted face image and the second face image according to the target face mask to obtain a target face image. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device, and the server 104 may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in fig. 2, a face image processing method is provided. The method is described here as applied to the server in fig. 1; it is understood that the method can also be applied to a terminal. In this embodiment, the method includes the following steps:
step 202, a first face image and a second face image are obtained, and the first face image and the second face image are images containing real faces.
Here, a face image refers to a face image that genuinely exists and has not been forged; it includes human face images, animal face images, and the like. The first face image is the source face image for face image fusion, and the second face image is the target face image for face image fusion.
Specifically, the server acquires the first face image and the second face image, which it may obtain in a number of different ways: the server may receive face images uploaded by the terminal, obtain them from a preset face image database, retrieve them from a third-party platform, capture them from the internet, or extract them from a video. The first face image and the second face image may be of the same type, for example both face images of the same kind of animal, or both face images of men. They may also be of different types; for example, the first face image may be the face of a cat and the second the face of a dog, or the first may be a male face and the second a female face.
In one embodiment, the first face image and the second face image are acquired, and when the sizes of the first face image and the second face image are not consistent, the sizes of the first face image and the second face image are adjusted to be consistent. For example, the first face image may be resized to be the same size as the second face image. The second face image may also be resized to be the same size as the first face image. The preset size can also be obtained, and the size of the first face image and the size of the second face image are respectively adjusted to be consistent with the preset size. For example, the size of the first face image is 2.5 × 3.5cm, the size of the second face image is 3.5 × 4.9cm, and the size of the first face image and the size of the second face image are adjusted to be consistent with the preset size of 3.5 × 4.9 cm.
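As an illustrative sketch of this size-alignment step (Python with OpenCV is assumed throughout these sketches; the input file names are hypothetical):

```python
# A minimal sketch of aligning the two face images to a common size.
# "face1.jpg" / "face2.jpg" are hypothetical input paths.
import cv2

face1 = cv2.imread("face1.jpg")  # first (source) face image
face2 = cv2.imread("face2.jpg")  # second (target) face image

# One of the options described above: resize the first face image to the
# second face image's dimensions (a preset size could be used instead).
h, w = face2.shape[:2]
face1 = cv2.resize(face1, (w, h))
```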
Step 204, the first face image is processed to generate a first updated face image having non-real face image characteristics.
A non-real face image is a face image that is not real but forged by technical means, for example a face-swapped image obtained with artificial intelligence face-changing technology. Non-real face image characteristics are the image characteristics of such images, including overly smooth image transitions, inconsistent image sharpness, image noise, and the like. The first updated face image is the face image obtained after image processing; it has non-real face image characteristics, for example the appearance of a face image generated by a generative adversarial network.
Specifically, the server may process the first face image using image processing algorithms, including image blurring algorithms, image compression algorithms, and random noise addition algorithms. Image blurring algorithms include Gaussian blur, mean (box) blur, dual blur, bokeh blur, tilt-shift blur, and the like. Image compression algorithms include JPEG (Joint Photographic Experts Group) compression, Huffman coding compression, run-length coding compression, and the like. Random noise addition algorithms include Gaussian noise addition, Poisson noise addition, salt-and-pepper noise addition, and the like. The server may randomly select one image processing algorithm to process the first face image, so that the resulting first updated face image has, for example, the characteristic of overly smooth transitions, inconsistent sharpness, or image noise. Alternatively, several image processing algorithms may be applied in sequence, and the final processed image is taken as the first updated face image with non-real face image characteristics; in that case the image may combine any of these characteristics, e.g. smooth transitions together with inconsistent sharpness, or inconsistent sharpness together with image noise, or all three at once.
In an embodiment, the server may first process the first face image with an image blurring algorithm to obtain a processed image, then compress the processed image with an image compression algorithm to obtain a compressed image, and use the compressed image as the first updated face image, or add random noise to the compressed image with a random noise addition algorithm to obtain the first updated face image. This processing simulates the effect of images generated by a generative adversarial network and improves the diversity of the generated target face images.
And step 206, adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image.
Here, the color distribution refers to the distribution of an image in the RGB color space. The first adjusted face image is the face image obtained by adjusting the color distribution of the first updated face image so that the adjusted distribution is close to the color distribution of the second face image.
Specifically, the server adjusts the color distribution of the first updated face image according to the color distribution of the second face image using a color adjustment algorithm to obtain the first adjusted face image. Color adjustment algorithms may include linear color transfer, LAB-space color transfer, probability-density-based color transfer, color histogram matching, and the like. Each time a target face image is generated, the server may randomly select a color adjustment algorithm and then use it to adjust the color distribution of the first updated face image according to that of the second face image, obtaining the first adjusted face image.
In step 208, a target face mask of the first face image is obtained, wherein the target face mask is generated by randomly deforming the face region of the first face image.
The face mask is an image obtained by initializing all pixel values in the face region in the first face image to 255, that is, initializing the face region to white, and initializing pixel values in regions other than the face region in the first face image to 0, that is, initializing regions other than the face region to black. The target face mask is an image generated by randomly deforming the face region of the first face image.
Specifically, the server acquires a target face mask of the first face image. The target face mask is obtained by extracting the face key points of the first face image in advance, determining the face region from the key points, randomly deforming the face region to obtain a face image with a deformed face region, and then generating the corresponding target face mask from that deformed image. When randomly deforming the face region, the area of the face region may be obtained and its size randomly adjusted; for example, a face region of area 20 may be adjusted to area 21. The boundary line of the face region may be obtained and its position or type randomly adjusted; for example, a straight boundary line may be changed to a curve, or the boundary may be moved by randomly shifting the position of its center point, e.g. from coordinate (1,1) to (1,2). The boundary key points of the face region may also be obtained and their positions randomly adjusted, for example by randomly shifting all boundary key points.
In an embodiment, the server may also generate a face mask of the first face image in advance, and then randomly deform a face region in the face mask of the first face image to obtain the target face mask. In one embodiment, the server may also retrieve the target face mask of the first face image directly from the database.
And step 210, fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
The target face image is the face image obtained by fusing the first adjusted face image with the second face image; it is a non-real face image, i.e., a false face image.
Specifically, the server fuses the first adjusted face image and the second face image according to the target face mask using an image fusion algorithm to obtain the target face image. Image fusion algorithms include alpha blending, Poisson fusion, Laplacian pyramid fusion, wavelet-transform-based fusion, neural-network-based fusion, and the like. Each time the first adjusted face image and the second face image are fused according to the target face mask, the server first randomly selects an image fusion algorithm and then performs the fusion with the selected algorithm to obtain the target face image.
In the above face image processing method, a first updated face image with non-real face image characteristics is generated by processing the first face image, and the color distribution of the first updated face image is then adjusted according to the color distribution of the second face image to obtain a first adjusted face image. A target face mask of the first face image is acquired, the target face mask being generated by randomly deforming the face region of the first face image, and the first adjusted face image and the second face image are fused according to the target face mask to obtain a target face image. A target face image constructed in this way can accurately imitate the effect of a false face image: it contains non-real face image characteristics, the color distribution of a non-real face image, the face-region shape of a non-real face image, and the like. Moreover, when a large number of target face images are generated in this way, each target face mask is obtained by randomly deforming the face region of the first face image, so the generated target face images have rich diversity.
In one embodiment, as shown in FIG. 3, the processing 204 of the first face image to generate a first updated face image having non-real face image characteristics includes:
step 302a, calculating the weight of the pixel point in the first face image by using a gaussian function to obtain a pixel point fuzzy weight matrix.
The Gaussian function is the density function of a normal distribution; the two-dimensional Gaussian function is shown in formula (1):

G(x, y) = \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)    (1)

where σ is the preset Gaussian radius, and x and y are the coordinates of a pixel point in the first face image.
Specifically, the server obtains a preset Gaussian radius, and calculates the weight of a pixel point in the first face image by using a Gaussian function to obtain a pixel point fuzzy weight matrix.
Step 302b, calculating to obtain a fuzzy pixel value of the pixel point according to the original pixel value of the pixel point in the first face image and the fuzzy weight matrix of the pixel point, and generating a first updated face image.
Specifically, the server performs convolution operation by using the original pixel values of the pixels in the first face image and the pixel fuzzy weight matrix to obtain the fuzzy pixel values of the pixels, and obtains a first updated face image according to the fuzzy pixel value of each pixel.
In one embodiment, the server may blur the first face image using Gaussian convolution, with kernel scales such as 3x3, 5x5, 7x7, 9x9, and 11x11. Each time the first face image is blurred with a Gaussian convolution, the server randomly selects one kernel scale, obtains the blurred first face image, and uses it to generate the target face image, which improves the diversity of the generated target face images.
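A minimal sketch of this blurring step follows, assuming OpenCV/NumPy. It mirrors steps 302a/302b: the blur weight matrix is built from a Gaussian function and then convolved with the original pixel values; passing sigma 0 to OpenCV, which derives the Gaussian radius from the kernel size, is an illustrative choice.

```python
# Sketch of Gaussian blurring with a randomly selected kernel scale.
import random
import cv2
import numpy as np

def gaussian_blur_random(face: np.ndarray) -> np.ndarray:
    k = random.choice([3, 5, 7, 9, 11])    # randomly selected convolution scale
    g = cv2.getGaussianKernel(k, 0)        # 1-D Gaussian weights (sigma from k)
    kernel = g @ g.T                       # pixel blur weight matrix (step 302a)
    return cv2.filter2D(face, -1, kernel)  # convolve -> blurred pixel values (302b)
```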
In the above embodiment, the weight of the pixel point in the first face image is calculated by using the gaussian function to obtain the pixel point fuzzy weight matrix, then the fuzzy pixel value of the pixel point is calculated according to the original pixel value of the pixel point in the first face image and the pixel point fuzzy weight matrix to generate the first updated face image, so that the first updated face image can be quickly obtained, the subsequent processing is facilitated, and the effect of generating the target face image is ensured to achieve the effect of the false face image.
In one embodiment, as shown in FIG. 3, the processing 204 of the first face image to generate a first updated face image having non-real face image characteristics includes:
step 304a, obtaining a compression ratio, and compressing the first face image by using the compression ratio to obtain a compressed first face image; and taking the compressed first face image as a first updated face image with the characteristics of the non-real face image.
The compression rate is the ratio of the memory occupied by the face image after compression to the memory it occupied before compression, and is preset.
Specifically, each time the server compresses a face image, it randomly selects the compression rate to use from preset compression rates, compresses the first face image at that rate to obtain a compressed first face image, and uses the compressed image as the first updated face image with non-real face image characteristics. In this way, first face images of different sharpness can be obtained, which facilitates subsequent use, ensures that the generated target face image achieves the effect of a false face image, and improves the diversity of the generated target face images.
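A minimal sketch of this compression step, assuming OpenCV; the JPEG quality parameter is used here as a stand-in for the randomly selected compression rate, and the 30-90 range is an illustrative assumption:

```python
# Sketch of JPEG round-trip compression at a random quality level.
import random
import cv2
import numpy as np

def jpeg_compress_random(face: np.ndarray) -> np.ndarray:
    quality = random.randint(30, 90)  # stand-in for the random compression rate
    ok, buf = cv2.imencode(".jpg", face, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    assert ok, "JPEG encoding failed"
    # The decoded image carries compression artifacts and serves as the
    # first updated face image with non-real face image characteristics.
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```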
In one embodiment, as shown in FIG. 3, the processing 204 of the first face image to generate a first updated face image having non-real face image characteristics includes:
step 306a, generating a gaussian noise value, and adding the gaussian noise value to the pixel value of the first face image to obtain a first updated face image with the characteristics of the unreal face image.
Gaussian noise is noise whose probability density function follows a Gaussian distribution. A Gaussian noise value is a random number sequence generated from the mean and variance of the Gaussian noise.
Specifically, the server prestores different means and variances of Gaussian noise. Each time noise is added, the server randomly selects the mean and variance to use, generates Gaussian noise values from them, adds the noise values to the pixel values of the first face image, and clips the resulting values to the valid pixel-value range to obtain the first updated face image with non-real face image characteristics. Generating the first updated face image by adding Gaussian noise facilitates subsequent use and ensures that the generated target face image achieves the effect of a false face image; because the mean and variance are selected randomly, the diversity of the generated target face images is also improved.
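A minimal sketch of this noise step, assuming NumPy; the candidate means and standard deviations are illustrative stand-ins for the prestored values:

```python
# Sketch of adding Gaussian noise with a randomly selected mean/variance,
# then clipping back to the valid pixel-value range.
import random
import numpy as np

def add_gaussian_noise_random(face: np.ndarray) -> np.ndarray:
    mean = random.choice([0.0, 1.0, 2.0])   # hypothetical prestored means
    sigma = random.choice([3.0, 5.0, 8.0])  # hypothetical prestored std-devs
    noise = np.random.normal(mean, sigma, face.shape)
    noisy = face.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)  # compress to [0, 255]
```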
In one embodiment, as shown in fig. 4, step 208, obtaining a target face mask of the first face image, the target face mask being generated by randomly deforming a face region of the first face image, includes:
step 402, extracting face key points in the first face image, and determining a face area of the first face image according to the face key points.
The face key points are used for representing the features of the face.
Specifically, the server extracts the face key points in the first face image using a face key point extraction algorithm. Face key point extraction algorithms include algorithms based on a Point Distribution Model (PDM), algorithms based on Cascaded Pose Regression (CPR), deep-learning-based algorithms, and the like; specific examples are the ASM (Active Shape Model) algorithm, the AAM (Active Appearance Model) algorithm, CPR, the SDM (Supervised Descent Method) algorithm, and Deep Convolutional Neural Network (DCNN) algorithms. The extracted face key points are then connected in sequence into a polygon, whose interior is the face region of the first face image.
In a specific embodiment, the server extracts 68 face key points using the landmark (a technique for face feature point extraction) algorithm, and obtains the face region of the first face image by connecting the face key points into polygons in sequence.
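A minimal sketch of this keypoint-to-mask step; dlib's 68-point landmark predictor is assumed as the extraction backend (the model file name is the standard dlib asset, an assumption here rather than something the patent specifies):

```python
# Sketch of step 402: extract 68 face key points and build a face-region
# mask by filling the polygon (convex hull) around them.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_region_mask(face: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    rect = detector(gray)[0]  # first detected face
    shape = predictor(gray, rect)
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)
    mask = np.zeros(face.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)  # face region = white
    return mask
```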
In one embodiment, the server may also generate a face mask directly from the determined face region of the first face image, i.e., generate a face mask from the undeformed first face image, and use it as the target face mask for subsequent processing, so that the generated target face images remain diverse.
Step 404, randomly adjusting the position of the key point of the face in the face area of the first face image to obtain a deformed face area, and generating a target face mask according to the deformed face area.
Specifically, the server randomly changes the positions of the face key points in the face region of the first face image, connects the moved key points in sequence into a polygon to obtain the deformed face region, and then generates the target face mask from the deformed face region and the other regions of the first face image. When a key point's position is randomly changed, a random offset is generated for the key point and added to its original position value to obtain the changed position value.
In one embodiment, after extracting the face key points in the first face image, the face key point positions in the first face image may be directly adjusted randomly to obtain a deformed face region, and the target face mask may be generated according to the deformed face region.
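A minimal sketch of the deformation in step 404, continuing from the previous sketch; the maximum per-point offset is an illustrative assumption:

```python
# Sketch of step 404: add a random offset to each key point's original
# position, then rebuild the mask polygon from the moved points.
import cv2
import numpy as np

def deformed_face_mask(image_shape, pts: np.ndarray, max_shift: int = 5) -> np.ndarray:
    offsets = np.random.randint(-max_shift, max_shift + 1, pts.shape)
    moved = (pts + offsets).astype(np.int32)  # changed position values
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, cv2.convexHull(moved), 255)
    return mask
```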
In the above embodiment, the deformed face region is obtained by randomly adjusting the position of the key point of the face in the face region of the first face image, the target face mask is generated according to the deformed face region, and the target face mask is used for subsequent processing, so that the diversity of the generated target face image is improved.
In one embodiment, after step 208, that is, after acquiring the target face mask of the first face image, the target face mask being generated by randomly deforming the face region of the first face image, the method further includes the steps of:
performing face occlusion detection on the second face image to obtain a face occlusion region; and adjusting the target face mask according to the face occlusion region to obtain an adjusted face mask.
Occlusion detection means detecting whether the face region in the second face image is occluded. The face occlusion region is the region of the second face image in which the face is occluded. Adjusting the face mask means removing the face occlusion region from the face region of the target face mask to obtain the adjusted face mask.
Specifically, the server performs face occlusion detection on the second face image using a deep-learning segmentation network to obtain segmentation regions, and determines the face occlusion region among them. It then adjusts the binarized value of each pixel in the target face mask according to the binarized value of the corresponding pixel in the face occlusion region, obtains the adjusted binarized values, and derives the adjusted face mask from them. The deep-learning segmentation network may be UNet (a semantic segmentation network based on FCN), FCN (Fully Convolutional Network), SegNet (an encoder-decoder convolutional network), DeepLab (an atrous-convolution network), or the like.
In one embodiment, FIG. 5 is a schematic diagram of obtaining the adjusted face mask: the target face mask 50 is adjusted according to the face occlusion region to obtain the adjusted face mask 52.
Step 210, fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image, comprising the steps of:
and fusing the first adjusted face image and the second face image according to the adjusted face mask to obtain a target face image.
Specifically, the server fuses the first adjusted face image and the second face image by using an image fusion algorithm according to the adjusted face mask to obtain a target face image.
In the above embodiment, the adjusted face mask is obtained by performing occlusion detection on the second face image, and the target face image is obtained by fusing the first adjusted face image and the second face image using the adjusted face mask, so that the diversity of the generated target face image is improved.
In one embodiment, the method for adjusting the target face mask according to the face occlusion region to obtain an adjusted face mask includes the steps of:
calculating the difference between the mask value of a pixel in the target face mask and the occlusion value of the corresponding pixel in the face occlusion region, using the difference as the mask adjustment value, and adjusting the face mask according to the mask adjustment value.
Here, the pixel mask value is the binarized value of a pixel in the target face mask, and the occlusion value is the binarized value of a pixel in the face occlusion region. The mask adjustment value is the binarized value of each pixel of the adjusted face mask.
Specifically, the server calculates the difference between the mask value of each pixel in the target face mask and the occlusion value of the corresponding pixel in the face occlusion region of the second face image to obtain the mask adjustment values. In a specific embodiment, pixels in the face region of the target face mask have value 1 and all other pixels have value 0, while pixels in the face occlusion region of the second face image have value 1 and the non-occluded region has value 0. Subtracting each pixel value of the second face image's occlusion map from the corresponding pixel value of the target face mask yields the adjusted pixel values, from which the adjusted face mask is obtained.
In the above embodiment, the difference between the pixel mask values in the target face mask and the pixel occlusion values in the face occlusion region is computed directly and used as the mask adjustment value, and the face mask is adjusted accordingly; this ensures that a target face image generated with the adjusted face mask achieves the effect of a false face.
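A minimal sketch of this adjustment, assuming both the target face mask and the occlusion map are binary 0/1 arrays of equal size as in the specific embodiment above; clamping negative differences to zero is an assumption for pixels occluded outside the face region:

```python
# Sketch of the mask adjustment: mask value minus occlusion value,
# so occluded face pixels drop out of the adjusted face mask.
import numpy as np

def adjust_mask(face_mask: np.ndarray, occlusion: np.ndarray) -> np.ndarray:
    diff = face_mask.astype(np.int16) - occlusion.astype(np.int16)
    return np.clip(diff, 0, 1).astype(np.uint8)  # adjusted face mask
```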
In one embodiment, as shown in fig. 6, the step 206 of adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image includes:
step 602, obtaining a target color adjustment algorithm identifier, and calling a target color adjustment algorithm according to the target color adjustment algorithm identifier, where the target color adjustment algorithm includes at least one of a color migration algorithm and a color matching algorithm.
The target color adjustment algorithm identifier is used to uniquely identify the color adjustment algorithm. Both the color migration algorithm and the color matching algorithm are used to adjust color distribution. The color migration algorithms include linear color migration, LAB space color migration, probability-density-based color migration, and the like. The color matching algorithms include color histogram matching and the like.
Specifically, each time the color distribution of the first updated face image is to be adjusted according to the color distribution of the second face image, the server randomly selects a target color adjustment algorithm and obtains the target color adjustment algorithm identifier corresponding to the selected algorithm. The target color adjustment algorithm is then invoked using this identifier. A calling interface for each color adjustment algorithm can be generated in advance; the corresponding calling interface is obtained according to the target color adjustment algorithm identifier and used to call the target color adjustment algorithm.
Step 604, adjusting the color distribution of the first updated face image to be consistent with the color distribution of the second face image based on the target color adjustment algorithm, so as to obtain a first adjusted face image.
Specifically, the server executes the target color adjustment algorithm to adjust the color distribution of the first updated face image to be consistent with that of the second face image, obtaining the first adjusted face image. In one embodiment, the first adjusted face image may be taken as obtained once the color distribution of the first updated face image is within a preset threshold of the color distribution of the second face image.
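As an illustration of one of the LAB space color migration algorithms named above, the following is a minimal Reinhard-style sketch using OpenCV; the function name and the mean/standard-deviation matching rule are assumptions for illustration, not the patent's prescribed implementation:

```python
import cv2
import numpy as np

def transfer_color_lab(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift the per-channel LAB mean/std of `source` toward `reference`.

    `source` plays the role of the first updated face image and
    `reference` that of the second face image; both are uint8 BGR arrays
    as returned by cv2.imread.
    """
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    ref_mean, ref_std = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    out = (src - src_mean) / src_std * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```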
Through the above embodiment, the color distribution of the first updated face image is adjusted to be consistent with that of the second face image, so the first adjusted face image carries the color information of the second face image. The generated target face image therefore contains no obvious face-swap boundary, which ensures that it accurately simulates a false face. In addition, because the target color adjustment algorithm is randomly selected for each adjustment, the diversity of the generated target face images is improved.
In one embodiment, as shown in fig. 7, the step 210 of fusing the first adjusted face image and the second face image according to the target face mask to obtain the target face image includes:
Step 702, acquiring a target image fusion algorithm identifier, and calling a target image fusion algorithm according to the target image fusion algorithm identifier; the target image fusion algorithm comprises at least one of a transparent mixing (alpha blending) algorithm, a Poisson fusion algorithm and a neural network algorithm.
The target image fusion algorithm identifier is used to uniquely identify the target image fusion algorithm and to call the corresponding target image fusion algorithm.
Specifically, the server randomly selects a target image fusion algorithm identifier from the stored image fusion algorithm identifiers and executes the corresponding image fusion algorithm according to it. In one embodiment, the server acquires the calling interface of the corresponding target image fusion algorithm according to the target image fusion algorithm identifier and calls the algorithm through that interface. The target image fusion algorithm comprises at least one of a transparent mixing (alpha blending) algorithm, a Poisson fusion algorithm and a neural network algorithm; the neural network algorithm uses an image fusion model trained in advance with a neural network to perform the fusion.
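One way the identifier-to-interface mechanism could be realized is a registry keyed by algorithm identifiers; the sketch below is purely illustrative, and the registry, decorator and algorithm names are assumptions rather than the patent's design:

```python
import random
from typing import Callable, Dict

FUSION_REGISTRY: Dict[str, Callable] = {}  # identifier -> calling interface

def register(algo_id: str):
    """Pre-generate a 'calling interface' for the given identifier."""
    def wrap(fn: Callable) -> Callable:
        FUSION_REGISTRY[algo_id] = fn
        return fn
    return wrap

@register("alpha_blend")
def alpha_blend_fuse(adjusted_img, second_img, mask):
    # Transparent mixing per formula (2); mask is a float array in [0, 1].
    return mask * adjusted_img + (1.0 - mask) * second_img

def fuse_with_random_algorithm(adjusted_img, second_img, mask):
    algo_id = random.choice(list(FUSION_REGISTRY))   # random identifier
    return FUSION_REGISTRY[algo_id](adjusted_img, second_img, mask)
```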
Step 704, fusing the first adjusted face image and the second face image based on the target face mask by using a target image fusion algorithm to obtain a target face image.
Specifically, each time the first adjusted face image and the second face image are fused, the server performs the fusion based on the target face mask using a randomly selected target image fusion algorithm to obtain the fused face image, namely the target face image. In one embodiment, the server inputs the target face mask, the first adjusted face image and the second face image into an image fusion model obtained by neural network training, and obtains the output fused face image, namely the target face image.
In the above embodiment, the target face image is obtained by fusing the first adjusted face image and the second face image based on the target face mask using a randomly selected target image fusion algorithm, so that the generated target face image has diversity.
In one embodiment, as shown in fig. 8, the step 704 of fusing the first adjusted face image and the second face image based on the target face mask using a target image fusion algorithm to obtain a target face image includes:
step 802, determine a first adjusted face region from the first adjusted face image based on the target face mask.
Wherein the first adjusted face region refers to a face region in the first adjusted face image.
Specifically, the server calculates the product of the mask value of each pixel point in the target face mask and the pixel value of the corresponding pixel point in the first adjusted face image, and obtains the first adjusted face region from the result.
Step 804, fusing the first adjusted face region to the position of the face region in the second face image to obtain the target face image.
Specifically, the server calculates the product of the complement of the mask value (one minus the mask value) of each pixel point in the target face mask and the pixel value of the corresponding pixel point in the second face image, determines the position of the face region in the second face image from the result, and then fuses the first adjusted face region into that position to obtain the target face image.
In one particular embodiment, the target face image is obtained using the following equation (2):
out = (1 − α) × A + α × B    formula (2)
Here out is the output pixel value and α is the corresponding pixel value in the target face mask, with value range [0, 1]; A is the pixel value in the second face image and B is the pixel value in the first adjusted face image. When α = 0 the background region pixel value is output; when α = 1 the face region pixel value is output; and when 0 < α < 1 the output is a blended pixel value.
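Formula (2) maps directly onto a short NumPy routine; the sketch below assumes the mask has already been scaled to [0, 1], and the names are illustrative:

```python
import numpy as np

def alpha_blend(second_img: np.ndarray, adjusted_img: np.ndarray,
                mask: np.ndarray) -> np.ndarray:
    """out = (1 - alpha) * A + alpha * B, per formula (2).

    second_img is A, adjusted_img is B, and mask holds the alpha values
    (H x W); the mask is broadcast over the color channels.
    """
    alpha = mask.astype(np.float32)
    if alpha.ndim == 2:
        alpha = alpha[..., None]
    out = (1.0 - alpha) * second_img.astype(np.float32) \
        + alpha * adjusted_img.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```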
In the above embodiment, the target face image is obtained by fusing the first adjusted face region to the face region position in the second face image, so that the target face image can be obtained conveniently and quickly.
In one embodiment, as shown in fig. 9, the step 704 of fusing the first adjusted face image and the second face image based on the target face mask using a target image fusion algorithm to obtain a target face image includes:
step 902, determining a region of interest from the first adjusted face image according to the target face mask, and calculating a first gradient field of the region of interest and a second gradient field of the second face image.
Wherein the region of interest refers to a face region in the first adjusted face image.
Specifically, the server determines the region of interest from the first adjusted face image according to the face region in the target face mask, and calculates the first gradient field of the region of interest and the second gradient field of the second face image using difference operations. The gradients of the region of interest may be calculated in two directions and the first gradient field derived from them; likewise, the gradients of the second face image in two directions yield the second gradient field.
Step 904, determining a fusion gradient field according to the first gradient field and the second gradient field, and calculating a fusion divergence field using the fusion gradient field.
The fusion gradient field is the gradient field corresponding to the target face image. The fusion divergence field is the divergence corresponding to the target face image, i.e., its Laplacian coordinates.
Specifically, the server overlays the first gradient field onto the second gradient field to obtain the fusion gradient field, then takes partial derivatives of the gradients in the fusion gradient field to obtain the fusion divergence field. The server may calculate the partial derivative of the fusion gradient field in each of two different directions and add the two to obtain the fusion divergence field.
Step 906, determining a second fused pixel value based on the fused divergence field, and obtaining the target face image according to the second fused pixel value.
The second fused pixel value refers to the pixel value of each pixel point in the target face image.
Specifically, the server constructs a coefficient matrix from the Poisson equation using the fusion divergence field, solves it to obtain the second fused pixel values, and obtains the target face image according to the second fused pixel values.
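In practice, this Poisson step can be delegated to OpenCV's seamlessClone, which internally builds the fused gradient and divergence fields and solves the resulting sparse linear system. The wrapper below is a sketch under that assumption; it expects a non-empty binary mask:

```python
import cv2
import numpy as np

def poisson_fuse(adjusted_img: np.ndarray, second_img: np.ndarray,
                 mask: np.ndarray) -> np.ndarray:
    """Fuse the masked region of adjusted_img into second_img by solving
    the Poisson equation (OpenCV's seamless cloning)."""
    mask_u8 = (mask > 0).astype(np.uint8) * 255
    ys, xs = np.nonzero(mask_u8)               # assumes at least one mask pixel
    center = (int(xs.mean()), int(ys.mean()))  # center of the region of interest
    return cv2.seamlessClone(adjusted_img, second_img, mask_u8,
                             center, cv2.NORMAL_CLONE)
```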
In the above embodiment, the target face image is obtained by fusing the first adjusted face image and the second face image based on the target face mask using the target image fusion algorithm, and the obtained target face image can be diversified.
In one embodiment, the target face image is used to train a face detection model that is used to detect the authenticity of the face image.
Specifically, the server generates a large number of target face images using the method of the above embodiments and trains a face detection model on them. The face detection model is used to detect the authenticity of a face image: when the detected authenticity exceeds a preset threshold, the face image is determined to be a real face image; when it does not, the face image is determined to be a non-real, i.e., false, face image.
In one embodiment, as shown in FIG. 10, the training of the face detection model includes the steps of:
step 1002 obtains a real face image dataset and a target face image dataset, each target face image in the target face image dataset generated using a different first real face image and second real face image in the real face image dataset, and taking the target face image dataset as the current face image dataset.
Wherein the real face image data set refers to an image data set composed of real face images.
Specifically, the server obtains the real face image dataset, which may come from a third-party real face image database or from image acquisition of real faces. The server then generates the target face images using different first and second real face images from the real face image dataset, where each distinct target face image may be generated by applying a randomly combined selection of different image processing algorithms, different color adjustment algorithms and different image fusion algorithms to the first and second real face images.
In a particular embodiment, the real face image dataset may be derived from the real face videos provided in FaceForensics++ (a face image dataset), and the target face image dataset is then generated using the real faces in FaceForensics++.
Step 1004, taking each real face image in the real face image dataset as positive sample data, taking each current face image in the current face image dataset as negative sample data, and training with a deep neural network algorithm to obtain the current face detection model.
Specifically, the server takes each real face image in the real face image dataset and each current face image in the current face image dataset as model inputs for training, and obtains the current face detection model when the training completion condition is met. The training completion condition includes reaching the maximum number of iterations or the value of the loss function satisfying a preset threshold condition. For example, the server uses an Xception (extreme-Inception deep convolutional network) network as the model's network structure, trains with cross-entropy as the loss function, and obtains the current face detection model when the training completion condition is reached. A network with stronger expressive ability may also be used as the model's network structure, for example a ResNet101 or ResNet152 deep residual network.
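A minimal training sketch of this step in PyTorch is shown below; ResNet-18 stands in for the Xception backbone (torchvision ships no Xception), and the loader is assumed to yield (image, label) batches with label 1 for real faces and 0 for generated target faces:

```python
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(num_classes=2).to(device)   # stand-in backbone
criterion = nn.CrossEntropyLoss()                   # cross-entropy loss from the text
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```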
Step 1006, obtaining test face image data and testing the current face detection model with it to obtain the accuracy corresponding to the current face detection model, where the test face image data and the real face image dataset are different datasets.
Specifically, the server may obtain the test face image data from a third-party database; the test face image data is a different dataset from the real face image dataset. The current face detection model is then tested with the test face image data, where AUC (area under the ROC curve) and AP (average precision) can be used as evaluation indexes to obtain the accuracy of the current face detection model in detecting the authenticity of face images.
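The two evaluation indexes can be computed, for example, with scikit-learn; this helper is an illustrative assumption rather than part of the patent:

```python
from sklearn.metrics import average_precision_score, roc_auc_score

def evaluate(y_true, y_score):
    """y_true: 1 for real faces, 0 for fakes; y_score: the model's
    predicted probability of the positive (real) class on the test set."""
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "AP": average_precision_score(y_true, y_score),
    }
```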
Step 1008, determining whether the accuracy exceeds a preset accuracy threshold; if so, executing step 1010a, and if not, executing step 1010b.
Step 1010b, obtaining an updated target face image dataset comprising each target face image in the target face image dataset together with newly generated update target face images, each regenerated using a different pair of first and second real face images from the real face image dataset; taking the updated target face image dataset as the current face image dataset and returning to step 1004.
Step 1010a, taking the obtained current face detection model as the face detection model.
The preset accuracy threshold is an accuracy threshold for detecting the authenticity of the face image by a preset face detection model.
Specifically, when the accuracy does not exceed the preset accuracy threshold, it indicates that the generalization ability of the trained model on other datasets is poor. The server then acquires the updated target face image dataset and iteratively retrains the current face detection model with it. The updated target face image dataset includes the target face images used in the previous training plus the regenerated target face images; that is, the face detection model is trained by augmenting the target face images in the training samples.
In the above embodiment, the target face image dataset is obtained and the face detection model is trained using it together with the real face image dataset. Because the target face image dataset contains rich and diverse target face images, the generalization ability of the trained face detection model is improved, and using this model to detect the authenticity of face images improves detection accuracy.
In a specific embodiment, the test face image data is used to test both an existing face artificial intelligence model and the face detection model of the present application; the resulting evaluation index data are shown in Table 1 below.
table 1 evaluation index data table
[Table 1 is rendered as images in the original publication; the individual evaluation index values are not recoverable here.]
The test dataset 1 may be Celeb-DF (a deepfake face dataset), and the test dataset 2 may be the DFDC (Deepfake Detection Challenge) dataset. The evaluation indexes of the face detection model trained after data enhancement in the present application are all better than those of the existing artificial intelligence model. The face detection model clearly improves the generalization performance of the model, so its detection results are more accurate.
In one embodiment, after the obtained current face detection model is taken as the face detection model, the method further includes:
the method comprises the steps of obtaining a face image to be detected, inputting the face image to be detected into a face detection model for detection, obtaining a detection result, and generating alarm information when the detection result is a non-real face image.
The face image to be detected is the face image that needs to be detected. The detection result indicates whether the face image to be detected is a real face image, and includes two outcomes: non-real face image and real face image. The alarm information is used to warn that the face image to be detected is not authentic, that is, that it is a non-real face image.
Specifically, the server acquires a face image to be detected, where the face image to be detected may be a face image uploaded by a user, a face image recognized by the server from various videos, a face image stored in a database in the server, or the like. The server deploys the trained face detection model in advance, and then inputs the face image to be detected into the face detection model for detection, so that the detection result output by the face detection model is obtained. And when the detection result is that the face image to be detected is the real face image, no processing is performed. And when the detection result is that the face image to be detected is a non-real face image, generating alarm information, and sending the alarm information to the management terminal for displaying so that the management terminal performs subsequent processing.
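A deployment-side sketch of this detection-and-alarm flow might look as follows; the 0.5 threshold, the softmax readout and the send_alarm hook are all illustrative assumptions:

```python
import torch

def send_alarm(message: str) -> None:
    # Stand-in for pushing alarm information to the management terminal.
    print("[ALARM]", message)

def detect_face_image(model, image_tensor, threshold=0.5):
    """Run the deployed face detection model on one preprocessed image
    tensor (1 x C x H x W); class 1 is assumed to mean 'real face'."""
    model.eval()
    with torch.no_grad():
        prob_real = torch.softmax(model(image_tensor), dim=1)[0, 1].item()
    if prob_real < threshold:
        send_alarm(f"non-real face image detected (p_real={prob_real:.3f})")
    return prob_real
```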
In the embodiment, the face image to be detected is detected by using the face detection model to obtain the detection result, and when the detection result is the non-real face image, the alarm information is generated, so that the accuracy of the face detection model for detecting the non-real face image is improved.
In a specific embodiment, as shown in fig. 11, the face image processing method specifically includes the steps of:
step 1102, acquiring a real face image dataset, and randomly selecting a first face image and a second face image from the real face image dataset;
Step 1104, calculating the weight of each pixel point in the first face image using a Gaussian function to obtain a pixel point blur weight matrix, calculating the blurred pixel value of each pixel point from its original pixel value and the blur weight matrix, and generating the first updated face image. That is, Gaussian blur.
Step 1106, randomly obtaining a compression rate and compressing the first updated face image with it to obtain a second updated face image. That is, image compression.
Step 1108, generating a Gaussian noise value and adding it to the pixel values of the second updated face image to obtain a third updated face image. That is, random noise addition.
In this embodiment, when generating the target face image, the server may randomly select which of steps 1104, 1106 and 1108 to execute, obtain the corresponding updated face image, and use it for subsequent processing. For example, the server may execute step 1104 alone, step 1106 alone, step 1108 alone, steps 1104 and 1106, steps 1106 and 1108, and so on, where each executed step processes the result of the previous one. The generated updated face image thus carries effects that simulate a generative adversarial network; it may carry a single such effect or several. FIG. 12 is a schematic diagram of randomly selecting methods to simulate the effects of a generative network.
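The random selection among steps 1104-1108 can be sketched as a single degradation routine; the 0.5 selection probabilities, kernel size, quality range and noise scale below are illustrative assumptions:

```python
import random

import cv2
import numpy as np

def degrade(image: np.ndarray) -> np.ndarray:
    """Apply a random subset of the three degradations of steps
    1104-1108, in order, to mimic generative-network artifacts."""
    out = image.copy()
    if random.random() < 0.5:                   # step 1104: Gaussian blur
        out = cv2.GaussianBlur(out, (5, 5), 0)
    if random.random() < 0.5:                   # step 1106: JPEG compression
        quality = random.randint(30, 90)
        _, buf = cv2.imencode(".jpg", out, [cv2.IMWRITE_JPEG_QUALITY, quality])
        out = cv2.imdecode(buf, cv2.IMREAD_COLOR)
    if random.random() < 0.5:                   # step 1108: Gaussian noise
        noise = np.random.normal(0.0, 5.0, out.shape)
        out = np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return out
```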
Step 1110, randomly obtaining a target color adjustment algorithm identifier, calling the target color adjustment algorithm according to the identifier, and adjusting the color distribution of the third updated face image according to the color distribution of the second face image based on the target color adjustment algorithm to obtain the first adjusted face image. FIG. 13 is a schematic diagram listing the names of the randomly selectable target color adjustment algorithms; the target color adjustment algorithm is randomly selected from among them.
Step 1112, extracting face key points in the first face image, determining a face region of the first face image according to the face key points, randomly adjusting positions of the face key points in the face region of the first face image to obtain a deformed face region, and generating a target face mask according to the deformed face region. Fig. 14 is a schematic diagram of mask generation and random deformation.
Step 1114, performing face occlusion detection on the second face image to obtain the face occlusion region, calculating the difference between the mask value of each pixel point in the target face mask and the occlusion value of the corresponding pixel point in the face occlusion region, taking the difference as the mask adjustment value, and obtaining the adjusted face mask according to the mask adjustment value.
Step 1116, randomly acquiring a target image fusion algorithm identifier, calling the target image fusion algorithm according to the identifier, and fusing the first adjusted face image and the second face image based on the target face mask using the target image fusion algorithm to obtain the target face image. FIG. 15 is a schematic diagram listing the names of the randomly selectable image fusion algorithms; the target image fusion algorithm is randomly selected from among them.
In this embodiment, the server repeatedly executes the above steps, each time randomly selecting a corresponding method from FIG. 12, FIG. 13, FIG. 14 and FIG. 15 for the corresponding step, thereby ensuring that diverse target face images are generated and the target face image dataset is obtained.
The application also provides an application scene, and the application scene applies the face image processing method. Specifically, the application of the facial image processing method to the application scene is as follows:
As shown in FIG. 16, a schematic diagram of a face image processing framework is provided. Specifically, the server acquires a real face image A and a real face image B. Real face image A is processed to generate a first updated face image with non-real face image characteristics, and the color distribution of the first updated face image is adjusted according to the color distribution of real face image B to obtain a first adjusted face image. A corresponding face mask is generated from real face image A and then deformed to obtain the deformed target face mask. The first adjusted face image and real face image B are fused according to the target face mask to obtain the target face image. A number of target face images are generated using the framework of FIG. 16; FIG. 17 shows a schematic diagram of some of the generated target face images, which include synthetic face images of higher and lower realism, all of which are color images in the original. A face authenticity detection model is then trained using a large number of generated target face images together with real face images. The face authenticity detection model is deployed in a face recognition payment platform; FIG. 18 is a schematic diagram of the application environment of the face image processing method applied to this platform, which includes a user terminal 1802, a platform server 1804 and a monitoring terminal 1806. When the user terminal 1802 performs face payment, a face image is collected through a camera and transmitted to the platform server 1804 over the network. The platform server 1804 performs authenticity detection on the collected face image through the deployed face authenticity detection model to obtain a detection result. When the detection result is a non-real face, alarm information is generated indicating that face recognition has failed because the face is not real; the alarm information is sent to the monitoring terminal 1806 for display, and payment failure information is sent to the user terminal 1802 for display. Identifying the authenticity of collected faces in this way improves the safety of face payment.
The application further provides an application scene, and the application scene applies the face image processing method. Specifically, the application of the facial image processing method to the application scene is as follows:
A first face image and a second face image are acquired, both being images containing real faces. The first face image is processed to generate a first updated face image with non-real face image characteristics. The color distribution of the first updated face image is adjusted according to the color distribution of the second face image to obtain a first adjusted face image. A target face mask of the first face image is acquired, the target face mask being generated by randomly deforming the face region of the first face image. The first adjusted face image and the second face image are fused according to the target face mask to obtain a target face image. A large number of target face images are generated in this way, a face-changing detection model is trained using them together with a real face image dataset, and the model is deployed in an internet video media platform. When the platform obtains a video uploaded by a user, it extracts the face image to be detected from the video and inputs it into the face-changing detection model to obtain a face-changing detection result, which distinguishes face-changed face images from non-face-changed face images; a face-changed face image is a non-real face image, while a non-face-changed face image is a real face image. When the face image to be detected is a face-changed face image and the face change is identified as infringing portrait rights, publication of the uploaded video is prohibited and the reason for the prohibition is returned to the user.
It should be understood that, although the steps in the flowcharts of FIGS. 2-4 and 6-11 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in FIGS. 2-4 and 6-11 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence but possibly in turn or in alternation with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 19, a facial image processing apparatus 1900 is provided, which may be a part of a computer device using software modules or hardware modules, or a combination of both, and specifically includes: an image acquisition module 1902, an image processing module 1904, a color adjustment module 1906, a mask acquisition module 1908, and an image fusion module 1910, wherein:
an image acquisition module 1902 for acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
an image processing module 1904, configured to process the first face image to generate a first updated face image with non-real face image characteristics;
a color adjustment module 1906, configured to adjust a color distribution of the first updated face image according to a color distribution of the second face image, to obtain a first adjusted face image;
a mask obtaining module 1908, configured to obtain a target face mask of the first face image, where the target face mask is generated by randomly deforming a face region of the first face image;
an image fusion module 1910 configured to fuse the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
In one embodiment, the image processing module 1904 includes:
the Gaussian blur unit is used for calculating the weight of the pixel points in the first face image by using a Gaussian function to obtain a pixel point blur weight matrix; and calculating to obtain the fuzzy pixel value of the pixel point according to the original pixel value of the pixel point in the first face image and the fuzzy weight matrix of the pixel point, and generating a first updated face image.
In one embodiment, the image processing module 1904 includes:
the image compression unit is used for acquiring a compression rate, and compressing the first face image by using the compression rate to obtain a compressed first face image; and taking the compressed first face image as a first updated face image with the characteristics of the non-real face image.
In one embodiment, the image processing module 1904 includes:
and the noise addition unit is used for generating a Gaussian noise value, and adding the Gaussian noise value to the pixel values of the first face image to obtain a first updated face image with the characteristics of a non-real face image.
In one embodiment, the mask acquisition module 1908 includes:
the key point extracting unit is used for extracting face key points in the first face image and determining a face area of the first face image according to the face key points;
and the deformation unit is used for randomly adjusting the positions of the face key points in the face region of the first face image to obtain a deformed face region, and generating a target face mask according to the deformed face region.
In one embodiment, the facial image processing apparatus 1900 further includes:
the occlusion detection module is used for carrying out face occlusion detection on the second face image to obtain a face occlusion area;
the mask adjusting module is used for adjusting the target face mask according to the face shielding area to obtain an adjusted face mask;
the image fusion module 1910 is further configured to fuse the first adjusted face image and the second face image according to the adjusted face mask to obtain a target face image.
In one embodiment, the mask adjustment module is further configured to calculate a difference between a mask value of a pixel in the target face mask and a shielding value of a pixel in the face shielding region, and use the difference as the mask adjustment value; and adjusting the face mask according to the mask adjusting value.
In one embodiment, the color adjustment module 1906 is further configured to obtain a target color adjustment algorithm identifier, and invoke a target color adjustment algorithm according to the target color adjustment algorithm identifier, where the target color adjustment algorithm includes at least one of a color migration algorithm and a color matching algorithm; and adjusting the color distribution of the first updated face image to be consistent with the color distribution of the second face image based on a target color adjustment algorithm to obtain a first adjusted face image.
In one embodiment, the image fusion module 1910 includes:
the calling unit is used for acquiring a target image fusion algorithm identifier and calling a target image fusion algorithm according to the target image fusion algorithm identifier; the target image fusion algorithm comprises at least one of a transparent mixing algorithm, a Poisson fusion algorithm and a neural network algorithm;
and the fusion unit is used for fusing the first adjusted face image and the second face image based on the target face mask by using a target image fusion algorithm to obtain a target face image.
In one embodiment, the fusion unit is further configured to determine a first adjusted face region from the first adjusted face image based on the target face mask; and fusing the first adjusted face area to the position of the face area in the second face image to obtain the target face image.
In one embodiment, the fusion unit is further configured to determine a region of interest from the first adjusted face image according to the target face mask, and calculate a first gradient field of the region of interest and a second gradient field of the second face image; determine a fusion gradient field according to the first gradient field and the second gradient field, and calculate a fusion divergence field using the fusion gradient field; and determine a second fused pixel value based on the fusion divergence field, and obtain the target face image according to the second fused pixel value.
In one embodiment, the facial image processing apparatus 1900 further includes:
a data acquisition module for acquiring a real face image dataset and a target face image dataset, each target face image in the target face image dataset being generated using a different first and second real face image in the real face image dataset, the target face image dataset being taken as the current face image dataset.
The model training module is used for taking each real face image in the real face image data set as positive sample data, taking each current face image in the current face image data set as negative sample data, and training by using a deep neural network algorithm to obtain a current face detection model;
the model testing module is used for acquiring testing face image data, testing the current face detection model by using the testing face image data to obtain the corresponding accuracy of the current face detection model, and enabling the testing face image data and the real face image data set to be different data sets;
an update data acquisition module for acquiring an update target face image dataset when the accuracy is less than a preset accuracy threshold, the update target face image dataset comprising each target face image and each update target face image in the target face image dataset, each update target face image being regenerated using a different first real face image and second real face image in the real face image dataset;
and the iteration loop module is used for taking the updated target face image dataset as the current face image dataset and returning to the step of taking each real face image in the real face image dataset as positive sample data, taking each current face image in the current face image dataset as negative sample data, and training with a deep neural network algorithm to obtain the current face detection model, until the accuracy exceeds the preset accuracy threshold, at which point the obtained current face detection model is taken as the face detection model.
In one embodiment, the facial image processing apparatus 1900 further includes:
the image detection module is used for acquiring a face image to be detected, inputting the face image to be detected into the face detection model for detection to obtain a detection result, and generating alarm information when the detection result is a non-real face image.
For specific limitations of the facial image processing apparatus, reference may be made to the above limitations of the facial image processing method, which are not described herein again. The respective modules in the above-described face image processing apparatus may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 20. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used to store target face image data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a face image processing method.
Those skilled in the art will appreciate that the architecture shown in fig. 20 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but it should not therefore be construed as limiting the scope of the patent. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A method for processing a facial image, the method comprising:
acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
processing the first face image to generate a first updated face image having non-real face image characteristics;
adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image;
acquiring a target face mask of the first face image, wherein the target face mask is generated by randomly deforming a face area of the first face image;
and fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image.
2. The method of claim 1, wherein the processing of the first face image to generate a first updated face image having non-real face image characteristics comprises:
calculating the weight of a pixel point in the first face image by using a Gaussian function to obtain a pixel point fuzzy weight matrix;
and calculating to obtain the fuzzy pixel value of the pixel point according to the original pixel value of the pixel point in the first face image and the fuzzy weight matrix of the pixel point, and generating the first updated face image.
3. The method of claim 1, wherein the processing of the first face image to generate a first updated face image having non-real face image characteristics comprises:
obtaining a compression ratio, and compressing the first face image by using the compression ratio to obtain a compressed first face image;
and taking the compressed first face image as the first updated face image with the characteristics of the non-real face image.
4. The method of claim 1, wherein obtaining a target face mask of the first face image, the target face mask generated by randomly deforming a face region of the first face image, comprises:
extracting face key points in the first face image, and determining a face area of the first face image according to the face key points;
and randomly adjusting the positions of the key points of the face in the face area of the first face image to obtain a deformed face area, and generating a target face mask according to the deformed face area.
5. The method according to claim 1, further comprising, after the obtaining a target face mask of the first face image, the target face mask being generated by randomly deforming a face region of the first face image:
carrying out face shielding detection on the second face image to obtain a face shielding area;
adjusting the target face mask according to the face shielding area to obtain an adjusted face mask;
the fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image, including:
and fusing the first adjusted face image and the second face image according to the adjusted face mask to obtain a target face image.
6. The method according to claim 5, wherein said adjusting the target face mask according to the face occlusion region to obtain an adjusted face mask comprises:
calculating a difference value between a pixel mask value in the target face mask and a pixel shielding value in the face shielding area, and taking the difference value as the mask adjustment value;
and obtaining the adjusted face mask according to the mask adjusting value.
7. The method of claim 1, wherein adjusting the color distribution of the first updated face image according to the color distribution of the second face image, resulting in a first adjusted face image, comprises:
acquiring a target color adjustment algorithm identifier, and calling a target color adjustment algorithm according to the target color adjustment algorithm identifier, wherein the target color adjustment algorithm comprises at least one of a color migration algorithm and a color matching algorithm;
and adjusting the color distribution of the first updated face image to be consistent with the color distribution of the second face image based on the target color adjustment algorithm to obtain a first adjusted face image.
8. The method of claim 1, wherein fusing the first adjusted facial image with the second facial image according to the target facial mask to obtain a target facial image comprises:
acquiring a target image fusion algorithm identifier, and calling a target image fusion algorithm according to the target image fusion algorithm identifier; the target image fusion algorithm comprises at least one of a transparent mixing algorithm, a Poisson fusion algorithm and a neural network algorithm;
and fusing the first adjusted face image and the second face image based on the target face mask by using the target image fusion algorithm to obtain a target face image.
9. The method of claim 8, wherein said fusing the first adjusted face image with the second face image based on the target face mask using the target image fusion algorithm to obtain a target face image comprises:
determining a first adjusted face region from the first adjusted face image according to the target face mask;
and fusing the first adjusted face area to the position of the face area in the second face image to obtain the target face image.
10. The method of claim 8, wherein said fusing the first adjusted face image with the second face image based on the target face mask using the target image fusion algorithm to obtain a target face image comprises:
determining a region of interest from the first adjusted face image according to the target face mask, calculating a first gradient field of the region of interest and a second gradient field of the second face image;
determining a fusion gradient field according to the first gradient field and the second gradient field, and calculating a fusion divergence field by using the fusion gradient field;
and determining a second fusion pixel value based on the fusion divergence field, and obtaining the target face image according to the second fusion pixel value.
11. The method of claim 1, wherein the target facial image is used to train a face detection model used to detect authenticity of the facial image.
12. The method of claim 11, wherein the training of the face detection model comprises the steps of:
obtaining a real face image dataset and a target face image dataset, each target face image in the target face image dataset being generated using a different first and second real face image in the real face image dataset;
taking the target face image data set as a current face image data set, taking each real face image in the real face image data set as positive sample data, taking each current face image in the current face image data set as negative sample data, and training by using a deep neural network algorithm to obtain a current face detection model;
acquiring test face image data, and testing the current face detection model by using the test face image data to obtain the accuracy corresponding to the current face detection model, wherein the test face image data and the real face image data set are different data sets;
when the accuracy is less than a preset accuracy threshold, obtaining an update target face image dataset comprising each target face image and each update target face image in the target face image dataset, the each update target face image being regenerated using a different first real face image and second real face image in the real face image dataset;
and taking the updated target face image data set as a current face image data set, returning to take each real face image in the real face image data set as positive sample data, taking each current face image in the current face image data set as negative sample data, training by using a deep neural network algorithm, and executing the step of obtaining a current face detection model until the accuracy exceeds the preset accuracy threshold value, and taking the obtained current face detection model as the face detection model.
13. A facial image processing apparatus, characterized in that the apparatus comprises:
an image acquisition module for acquiring a first face image and a second face image, the first face image and the second face image being images containing a real face;
the image processing module is used for processing the first face image to generate a first updated face image with the characteristics of a non-real face image;
the color adjusting module is used for adjusting the color distribution of the first updated face image according to the color distribution of the second face image to obtain a first adjusted face image;
a mask obtaining module, configured to obtain a target face mask of the first face image, where the target face mask is generated by randomly deforming a face region of the first face image;
and the image fusion module is used for fusing the first adjusted face image and the second face image according to the target face mask to obtain a target face image, wherein the target face image is used for training a face detection model, and the face detection model is used for detecting the authenticity of the face image.
14. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 12.
15. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202010730209.9A 2020-07-27 2020-07-27 Face image processing method, device, computer equipment and storage medium Active CN111754396B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010730209.9A CN111754396B (en) 2020-07-27 2020-07-27 Face image processing method, device, computer equipment and storage medium
PCT/CN2021/100912 WO2022022154A1 (en) 2020-07-27 2021-06-18 Facial image processing method and apparatus, and device and storage medium
US17/989,169 US20230085605A1 (en) 2020-07-27 2022-11-17 Face image processing method, apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010730209.9A CN111754396B (en) 2020-07-27 2020-07-27 Face image processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111754396A true CN111754396A (en) 2020-10-09
CN111754396B CN111754396B (en) 2024-01-09

Family

ID=72712070

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010730209.9A Active CN111754396B (en) 2020-07-27 2020-07-27 Face image processing method, device, computer equipment and storage medium

Country Status (3)

Country Link
US (1) US20230085605A1 (en)
CN (1) CN111754396B (en)
WO (1) WO2022022154A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112383765A (en) * 2020-11-10 2021-02-19 中移雄安信息通信科技有限公司 VR image transmission method and device
CN112541926A (en) * 2020-12-15 2021-03-23 福州大学 Ambiguous pixel optimization segmentation method based on improved FCN and Densenet
CN112598580A (en) * 2020-12-29 2021-04-02 广州光锥元信息科技有限公司 Method and device for improving definition of portrait photo
CN113344832A (en) * 2021-05-28 2021-09-03 杭州睿胜软件有限公司 Image processing method and device, electronic equipment and storage medium
WO2022022154A1 (en) * 2020-07-27 2022-02-03 腾讯科技(深圳)有限公司 Facial image processing method and apparatus, and device and storage medium
CN114140319A (en) * 2021-12-09 2022-03-04 北京百度网讯科技有限公司 Image migration method and training method and device of image migration model
CN115187446A (en) * 2022-05-26 2022-10-14 北京健康之家科技有限公司 Face changing video generation method and device, computer equipment and readable storage medium

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085701B (en) * 2020-08-05 2024-06-11 深圳市优必选科技股份有限公司 Face ambiguity detection method and device, terminal equipment and storage medium
US11941844B2 (en) * 2020-08-05 2024-03-26 Ubtech Robotics Corp Ltd Object detection model generation method and electronic device and computer readable storage medium using the same
CN114724218A (en) * 2022-04-08 2022-07-08 北京中科闻歌科技股份有限公司 Video detection method, device, equipment and medium
CN115861122A (en) * 2022-12-26 2023-03-28 北京字跳网络技术有限公司 Face image processing method and device, computer equipment and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978754A (en) * 2017-12-28 2019-07-05 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111754396B (en) * 2020-07-27 2024-01-09 腾讯科技(深圳)有限公司 Face image processing method, device, computer equipment and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2315430A2 (en) * 2009-10-23 2011-04-27 Sony Corporation Image processing apparatus and image processing method
US20160127359A1 (en) * 2014-11-01 2016-05-05 Ronald Henry Minter Compliant authentication based on dynamically-updated credentials
CN107392142A (en) * 2017-07-19 2017-11-24 广东工业大学 Real and fake face identification method and device
CN109003282A (en) * 2018-07-27 2018-12-14 京东方科技集团股份有限公司 Image processing method, apparatus and computer storage medium
CN109191410A (en) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 Facial image fusion method, device and storage medium
CN111242852A (en) * 2018-11-29 2020-06-05 奥多比公司 Boundary aware object removal and content filling
CN109829930A (en) * 2019-01-15 2019-05-31 深圳市云之梦科技有限公司 Face image processing method and device, computer equipment and readable storage medium
CN110399849A (en) * 2019-07-30 2019-11-01 北京市商汤科技开发有限公司 Image processing method and device, processor, electronic equipment and storage medium
CN110458781A (en) * 2019-08-14 2019-11-15 北京百度网讯科技有限公司 Method and apparatus for processing images
CN111325657A (en) * 2020-02-18 2020-06-23 北京奇艺世纪科技有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111353392A (en) * 2020-02-18 2020-06-30 腾讯科技(深圳)有限公司 Face change detection method, device, equipment and storage medium
CN111368796A (en) * 2020-03-20 2020-07-03 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Xurong; Yu Kun: "A Deepfakes Detection Technique Based on a Two-Stream Network", Journal of Cyber Security (信息安全学报), no. 02, pages 89-96 *

Also Published As

Publication number Publication date
US20230085605A1 (en) 2023-03-16
CN111754396B (en) 2024-01-09
WO2022022154A1 (en) 2022-02-03

Similar Documents

Publication Title
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN110490212B (en) Molybdenum target image processing equipment, method and device
CN110866509B (en) Action recognition method, device, computer storage medium and computer equipment
CN112801057B (en) Image processing method, image processing device, computer equipment and storage medium
CN112084917B (en) Living body detection method and device
CN110503076B (en) Video classification method, device, equipment and medium based on artificial intelligence
CN111680672B (en) Face living body detection method, system, device, computer equipment and storage medium
CN110728209A (en) Gesture recognition method and device, electronic equipment and storage medium
CN111814902A (en) Target detection model training method, target identification method, device and medium
CN111476806B (en) Image processing method, image processing device, computer equipment and storage medium
CN111368672A (en) Construction method and device for genetic disease facial recognition model
CN111768336A (en) Face image processing method and device, computer equipment and storage medium
CN111241989A (en) Image recognition method and device and electronic equipment
CN114092833B (en) Remote sensing image classification method and device, computer equipment and storage medium
CN111553267A (en) Image processing method, image processing model training method and device
CN113705290A (en) Image processing method, image processing device, computer equipment and storage medium
CN110222718A (en) Image processing method and device
CN115249306B (en) Image segmentation model training method, image processing device and storage medium
CN113011253B (en) Facial expression recognition method, device, equipment and storage medium based on ResNeXt network
CN115050064A (en) Face living body detection method, device, equipment and medium
CN113569598A (en) Image processing method and image processing apparatus
CN117854155B (en) Human skeleton action recognition method and system
CN114677611B (en) Data identification method, storage medium and device
CN113570615A (en) Image processing method based on deep learning, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40030053
Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant