CN114494002B - AI face changing video-based original face image intelligent restoration method and system - Google Patents

AI face changing video-based original face image intelligent restoration method and system

Info

Publication number
CN114494002B
Authority
CN
China
Prior art keywords
face
facial
changing
video
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210320804.4A
Other languages
Chinese (zh)
Other versions
CN114494002A (en)
Inventor
李美玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Silicon Based Kunshan Intelligent Technology Co ltd
Original Assignee
Guangzhou Gongping Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Gongping Technology Co ltd filed Critical Guangzhou Gongping Technology Co ltd
Priority to CN202210320804.4A priority Critical patent/CN114494002B/en
Publication of CN114494002A publication Critical patent/CN114494002A/en
Application granted granted Critical
Publication of CN114494002B publication Critical patent/CN114494002B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an intelligent restoration method and system for an original face image based on AI face-changing videos, wherein the method comprises the following steps: S1: acquiring an AI face-changing video set of an object to be restored; S2: extracting common features of the AI face-changing video set, and determining facial features of the object to be restored in a preset facial state based on the common features; S3: restoring an original face image corresponding to the object to be restored in the preset facial state based on the facial features and a preset skull model. By extracting common features from a large number of AI face-changing videos of the object to be restored, the facial features of the object in the preset facial state are obtained, and the original face image corresponding to the object in that state is restored from these features, thereby reducing the crisis of information authenticity caused by AI face-changing videos.

Description

AI face changing video-based original face image intelligent restoration method and system
Technical Field
The invention relates to the technical field of image processing, and in particular to an intelligent restoration method and system for an original face image based on AI face-changing videos.
Background
At present, with the development of information technology and artificial intelligence, AI face-changing apps have spread to the general public. Owing to the emergence of AI video face-changing technology, the authenticity and authority of video content are greatly reduced, and the boundary between real and fabricated video content is increasingly blurred. With such technical support, an operator can arbitrarily replace, synthesize, and tamper with the information content of static pictures or dynamic videos, so that false video information proliferates; a method for restoring the original face image from AI face-changing videos therefore needs to be developed.
Therefore, the invention provides an intelligent restoration method and system for the original face image based on AI face-changing videos.
Disclosure of Invention
The invention provides an intelligent restoration method and system for an original face image based on AI face-changing videos, which extract common features from a large number of AI face-changing videos of an object to be restored to obtain the facial features of the object in a preset facial state, and restore the original face image corresponding to the object in the preset facial state based on those facial features, thereby reducing the crisis of information authenticity caused by AI face-changing videos.
The invention provides an AI face changing video-based original face image intelligent restoration method, which comprises the following steps:
s1: acquiring an AI face changing video set of an object to be restored;
s2: extracting common features of the AI face changing video set, and determining facial features of the object to be restored in a preset facial state based on the common features;
s3: and restoring an original face image corresponding to the object to be restored in a preset face state based on the face features and a preset skull model.
Preferably, in the method for intelligently restoring an original face image based on an AI face changing video, S1: acquiring an AI face-changing video set of an object to be restored, comprising the following steps:
s101: acquiring all face changing videos of an object to be restored;
s102: classifying all face-changing videos based on the gender of the corresponding face-changing person in the face-changing videos to obtain a first face-changing video subset corresponding to each gender;
s103: classifying the first face changing video subset into a second face changing video subset corresponding to each face changing person based on the corresponding face changing person in the face changing video;
s104: and summarizing all the second face changing video subsets to obtain the AI face changing video set of the object to be restored.
Preferably, in the method for intelligently restoring an original face image based on an AI face changing video, S2: extracting the common characteristics of the AI face changing video set, and determining the facial characteristics of the object to be restored under a preset facial state based on the common characteristics, wherein the method comprises the following steps:
extracting the common characteristics of the five sense organs corresponding to the AI face changing video set;
extracting facial muscle movement common characteristics corresponding to the AI face changing video set;
determining a facial muscle arch height set and a facial organ coordinate set of the object to be restored in a preset facial state as corresponding facial features based on the common features;
wherein the common features include: common features of five sense organs and common features of facial muscle movements.
Preferably, the method for intelligently restoring the original face image based on the AI face change video extracts the common feature of five sense organs corresponding to the AI face change video set, and includes:
extracting first facial feature images contained in the first face-changing video subset corresponding to each gender contained in the AI face-changing video set, determining a first facial feature coordinate subset corresponding to each first facial feature image, obtaining a first facial feature coordinate set corresponding to each gender based on all the first facial feature coordinate subsets, and performing common feature extraction on the first facial feature coordinate set to obtain a first facial feature common feature corresponding to each gender;
extracting second facial feature images contained in the second face-changing video subset corresponding to each face-changing person contained in the AI face-changing video set, determining a second facial feature coordinate subset corresponding to each second facial feature image, obtaining a second facial feature coordinate set corresponding to each face-changing person based on all the second facial feature coordinate subsets, and performing common feature extraction on the second facial feature coordinate set to obtain a second facial feature common feature corresponding to each face-changing person;
wherein the common features of the five sense organs include: the first facial feature common feature and the second facial feature common feature.
Preferably, the method for intelligently restoring the original face image based on the AI face change video extracts the facial muscle movement commonality characteristics corresponding to the AI face change video set, and includes:
extracting a first facial muscle movement common characteristic corresponding to the gender and the expression from the AI face changing video set;
extracting second facial muscle motion common characteristics corresponding to the expressions corresponding to the face-changing persons from the AI face-changing video set;
wherein the facial muscle movement commonality characteristics comprise: the first facial muscle motion commonality characteristic and the second facial muscle motion commonality characteristic.
Preferably, the method for intelligently restoring an original face image based on an AI face changing video extracts a first facial muscle movement commonality characteristic corresponding to a gender and an expression from the AI face changing video set, and includes:
determining a first face changing video contained in a first face changing video subset corresponding to each gender contained in the AI face changing video set;
performing expression analysis on the first face changing video to obtain a first face changing video segment set corresponding to each expression of each gender;
determining a first face changing image contained in the first face changing video segment set;
determining each local muscle region contained in the first face-changed image based on a standard facial muscle profile;
and dynamically analyzing each local muscle area contained in the first face changing video segment set to obtain a first facial muscle motion common characteristic corresponding to the gender and the expression.
Preferably, the method for intelligently restoring an original face image based on an AI face change video dynamically analyzes each local muscle area included in a first face change video segment set to obtain a first face muscle motion commonality characteristic corresponding to a gender-corresponding expression includes:
extracting a dynamic video segment corresponding to each local muscle region from the first face changing video segment set;
determining a preset Hessian matrix corresponding to each preset characteristic point in a preset characteristic point set corresponding to the local muscle area;
determining a first Hessian matrix corresponding to each first coordinate point contained in a local muscle area corresponding to each first video frame in the dynamic video segment;
calculating a corresponding similarity value between the preset Hessian matrix and each first Hessian matrix;
taking a second coordinate point corresponding to the first Hessian matrix corresponding to the minimum similarity value as a corresponding first characteristic point, contained in the corresponding local muscle area, corresponding to the corresponding preset characteristic point;
determining all first characteristic points contained in the local muscle area to obtain a corresponding first characteristic point set;
carrying out binarization processing on each second video frame in the first face changing video to obtain a corresponding third video frame;
performing light and shadow feature extraction on the third video frame to obtain a corresponding first light and shadow feature;
determining a light and shadow conversion matrix between the first light and shadow feature and a corresponding preset light and shadow feature under a preset light and shadow condition;
converting the corresponding local muscle area in the corresponding first video frame into a corresponding standard image under a preset light and shadow condition based on the light and shadow conversion matrix;
dividing a local muscle region contained in the first video frame into a plurality of shadow analysis regions based on the first set of feature points;
sequencing the light and shadow analysis regions according to the frame sequence corresponding to the dynamic video segment to obtain a corresponding light and shadow analysis region change sequence;
converting the light and shadow analysis region change sequence into a corresponding local muscle arch height sequence;
and summarizing the local muscle arch height sequence to obtain a first facial muscle movement commonality characteristic corresponding to the gender-corresponding expression.
Preferably, the method for intelligently restoring an original face image based on an AI face changing video extracts a second facial muscle movement commonality characteristic corresponding to a corresponding expression of a face changing person from the AI face changing video set, and includes:
determining second face changing videos contained in a second face changing video sub-set corresponding to each face changing person contained in the AI face changing video set;
performing expression analysis on the second face changing video to obtain a second face changing video segment set corresponding to each expression of each face changing person;
determining a second face changing image contained in the second face changing video segment set;
determining each local muscle region contained in the second face-changed image based on a standard facial muscle profile;
and dynamically analyzing each local muscle area contained in the second face changing video segment set to obtain a second facial muscle motion common characteristic corresponding to the corresponding expression of the face changing person.
Preferably, the method for intelligently restoring the original face image based on the AI face changing video determines, based on the common features, a facial muscle arch height set and a facial feature coordinate set of the object to be restored in a preset face state as corresponding facial features, and includes:
performing difference comparison on the common characteristic of the first five sense organs and the characteristic of the first standard five sense organs corresponding to the corresponding gender to obtain the characteristic deviation of the first five sense organs corresponding to the corresponding gender;
performing difference comparison on the first facial muscle motion common characteristics and first standard facial muscle motion characteristics corresponding to the gender-corresponding expression to obtain first facial muscle motion characteristic deviation corresponding to the gender-corresponding expression;
summarizing the first five-sense-organ feature deviation corresponding to each gender and all the corresponding first facial muscle movement characteristic deviations to obtain a first total deviation corresponding to each gender;
taking the gender corresponding to the smallest first total deviation as the restored gender corresponding to the object to be restored;
determining a first face-changing person corresponding to the restored gender;
performing difference comparison on a second facial feature common characteristic corresponding to the first face-changing person in a preset face state and a second standard facial feature corresponding to the first face-changing person in the preset face state to obtain a second facial feature deviation corresponding to the first face-changing person in the preset face state;
carrying out common feature extraction on second facial feature deviations corresponding to all first face changing persons to obtain corresponding common features of the facial feature deviations of the object to be restored in a preset face state;
determining a coordinate set of the five sense organs corresponding to the object to be restored based on the common features of the deviation of the five sense organs and the second standard features of the five sense organs;
comparing a difference between a second face muscle motion common characteristic corresponding to the first face-changing person in a preset face state and a second standard face muscle motion characteristic corresponding to the first face-changing person in the preset face state to obtain a first face muscle motion characteristic deviation corresponding to the first face-changing person in the preset face state;
performing common feature extraction on the first face muscle movement feature deviations corresponding to all the first face changing persons to obtain the common feature of the face muscle movement deviation corresponding to the object to be restored in a preset face state;
determining a facial muscle arch height set corresponding to the object to be restored based on the facial muscle movement deviation commonality characteristics and the second standard facial muscle movement characteristics;
and taking the facial muscle arch height set and the facial organ coordinate set of the object to be restored in a preset facial state as corresponding facial features.
The invention provides an AI face-changing video-based original face image intelligent restoration system, which comprises:
the video acquisition module is used for acquiring an AI face changing video set of the object to be restored;
the feature extraction module is used for extracting the common feature of the AI face changing video set and determining the facial feature of the object to be restored in a preset facial state based on the common feature;
and the image restoration module is used for restoring an original face image corresponding to the object to be restored in a preset face state based on the face features and a preset skull model.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an original face image intelligent restoration method based on an AI face changing video according to an embodiment of the present invention;
fig. 2 is a flowchart of another original face image intelligent restoration method based on an AI face changing video according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an original face image intelligent restoration system based on an AI face changing video according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides an AI face-changing video-based original face image intelligent restoration method, which comprises the following steps of referring to FIG. 1:
s1: acquiring an AI face changing video set of an object to be restored;
s2: extracting common features of the AI face changing video set, and determining facial features of the object to be restored in a preset facial state based on the common features;
s3: and restoring an original face image corresponding to the object to be restored in a preset face state based on the face features and a preset skull model.
In this embodiment, the object to be restored is the object whose original face needs to be restored from a large number of face-changing videos in the present invention, and is also the user whose face has been changed.
In this embodiment, the AI face-changing video set is a set formed by videos in which the face of the object to be restored has been replaced with another person's face.
In this embodiment, the common characteristic is a characteristic shared by the AI face-changing videos included in the AI face-changing video set.
In this embodiment, the facial feature is a feature of the object to be restored in a preset facial state, determined based on the common feature of the AI face-changing video set.
In this embodiment, the preset facial state is a preset facial expression and a preset lighting condition corresponding to the restoration of the facial image of the object to be restored.
In this embodiment, the preset skull model is a standard skull three-dimensional model prepared in advance.
In this embodiment, the original face image is a face image of the object to be restored in a preset face state, which is restored based on the face features and the preset skull model.
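For orientation only, the three steps can be chained as a small pipeline. The following Python sketch is an illustrative assumption about interfaces: the FacialFeatures container and the three callables are invented for this sketch and are not defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class FacialFeatures:
    """Facial features of the object to be restored in the preset facial state."""
    organ_coordinates: dict       # five-sense-organ contour coordinates
    muscle_arch_heights: dict     # facial muscle arch height per local region

def restore_original_face(
    video_set: Sequence[object],
    extract_commonality: Callable[[Sequence[object]], dict],
    derive_features: Callable[[dict], FacialFeatures],
    render_face: Callable[[FacialFeatures, object], object],
    skull_model: object,
) -> object:
    """Assumes S1 has already produced video_set; runs S2 and S3."""
    common = extract_commonality(video_set)      # S2: common features of the video set
    features = derive_features(common)           # S2: facial features in the preset state
    return render_face(features, skull_model)    # S3: restored original face image
```

Keeping S2 and S3 behind plain callables makes it straightforward to swap in different commonality extractors or rendering back ends.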
The beneficial effects of the above technology are: the method comprises the steps of extracting common features of a large number of AI face change videos of an object to be restored based on the AI face change videos, obtaining facial features of the object to be restored in a preset facial state, restoring an original face image corresponding to the object to be restored in the preset facial state based on the facial features, and reducing the crisis of information reality caused by the AI face change videos.
Example 2:
on the basis of embodiment 1, in the method for intelligently restoring the original face image based on the AI face-changing video, S1: acquiring an AI face-changing video set of an object to be restored, with reference to fig. 2, including:
s101: acquiring all face changing videos of an object to be restored;
s102: classifying all face change videos based on the genders of corresponding face change people in the face change videos to obtain a first face change video subset corresponding to each gender;
s103: classifying the first face changing video subset into a second face changing video subset corresponding to each face changing person based on the corresponding face changing person in the face changing video;
s104: and summarizing all the second face changing video subsets to obtain the AI face changing video set of the object to be restored.
In this embodiment, the face-changing video is a video obtained by synthesizing the facial features of the object to be restored and the facial videos of other people.
In this embodiment, the first face change video subset is a face change video set corresponding to each gender obtained by classifying all face change videos based on the genders of corresponding face change people in the face change videos.
In this embodiment, the second face changing video subset is a face changing video set that classifies the first face changing video subset into each face changing person based on the corresponding face changing person in the face changing video.
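As a rough illustration of S101 to S104, the grouping can be expressed with two dictionary passes. The sketch below assumes each face-changing video already carries 'gender' and 'person_id' labels for the face-changed person; how those labels are obtained is not covered here.

```python
from collections import defaultdict

def build_face_changing_video_set(videos):
    """S101-S104: group face-changing videos by gender, then by face-changed person.

    Each video is assumed to be a dict carrying 'gender' and 'person_id' labels
    for the face-changed person.
    """
    # S102: first face-changing video subsets, one per gender
    first_subsets = defaultdict(list)
    for video in videos:
        first_subsets[video["gender"]].append(video)

    # S103: second face-changing video subsets, one per face-changed person per gender
    second_subsets = defaultdict(lambda: defaultdict(list))
    for gender, subset in first_subsets.items():
        for video in subset:
            second_subsets[gender][video["person_id"]].append(video)

    # S104: the AI face-changing video set keeps both groupings together
    return {
        "first_subsets": dict(first_subsets),
        "second_subsets": {g: dict(p) for g, p in second_subsets.items()},
    }
```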
The beneficial effects of the above technology are: all face-changing videos of the object to be restored are classified based on gender and the face-changing person to obtain corresponding types of face-changing video combination, and a data basis is provided for carrying out different-angle common feature extraction on different types of face-changing videos and restoring the face features corresponding to the object to be restored.
Example 3:
on the basis of the embodiment 2, in the method for intelligently restoring the original face image based on the AI face-changing video, S2: extracting the common characteristics of the AI face changing video set, and determining the facial characteristics of the object to be restored under a preset facial state based on the common characteristics, wherein the method comprises the following steps:
extracting the common characteristics of the five sense organs corresponding to the AI face changing video set;
extracting facial muscle movement common characteristics corresponding to the AI face changing video set;
determining a facial muscle arch height set and a facial organ coordinate set of the object to be restored in a preset face state as corresponding face features based on the common features;
wherein the common features include: common features of five sense organs and common features of facial muscle movements.
In this embodiment, the feature of commonality between five sense organs is a feature of commonality between five sense organs of the face images included in the AI face-changing video set.
In this embodiment, the feature of the commonality of facial muscle movement is a feature of the commonality of facial muscle movement of the face image included in the AI face change video set.
In this embodiment, the set of facial muscle arch heights is a sequence formed by determining muscle arch heights representing different parts of the face of the object to be restored based on the common characteristics of facial muscle movements.
In this embodiment, the set of coordinates of the five sense organs is a set formed by the coordinates corresponding to the contours of the five sense organs of the object to be restored, which are determined based on the common characteristics of the five sense organs.
The beneficial effects of the above technology are: the facial muscle arch height set and the five-sense-organ coordinate set of the object to be restored are determined through the facial muscle movement common characteristics and the five-sense-organ common characteristics of the object to be restored, which further provides an important basis for restoring the facial image of the object to be restored.
Example 4:
on the basis of embodiment 3, the method for intelligently restoring an original face image based on an AI face changing video extracts the common features of the five sense organs corresponding to the AI face changing video set, and includes:
extracting first facial feature images contained in the first face-changing video subset corresponding to each gender contained in the AI face-changing video set, determining a first facial feature coordinate subset corresponding to each first facial feature image, obtaining a first facial feature coordinate set corresponding to each gender based on all the first facial feature coordinate subsets, and performing common feature extraction on the first facial feature coordinate set to obtain a first facial feature common feature corresponding to each gender;
extracting second facial feature images contained in the second face-changing video subset corresponding to each face-changing person contained in the AI face-changing video set, determining a second facial feature coordinate subset corresponding to each second facial feature image, obtaining a second facial feature coordinate set corresponding to each face-changing person based on all the second facial feature coordinate subsets, and performing common feature extraction on the second facial feature coordinate set to obtain a second facial feature common feature corresponding to each face-changing person;
wherein the common features of the five sense organs include: the first facial feature common feature and the second facial feature common feature.
In this embodiment, the first facial image is a facial image included in the first facial changing video subset corresponding to each gender included in the AI facial changing video set.
In this embodiment, the first facial feature coordinate subset is a coordinate set of facial feature outlines included in the first facial feature image.
In this embodiment, the first facial feature coordinate set is a coordinate set of facial feature contours corresponding to each gender obtained based on all the first facial feature coordinate subsets.
In this embodiment, the first facial feature common feature is the common feature of the five sense organs corresponding to each gender, obtained after common feature extraction is performed on the first facial feature coordinate set.
In this embodiment, the second facial feature image is a facial feature image included in the second face-changing video subset corresponding to each face-changing person included in the AI face-changing video set.
In this embodiment, the second facial features coordinate subset is a coordinate set of facial features contours included in the second facial features image.
In this embodiment, the second facial feature coordinate set is a coordinate set of facial feature outlines corresponding to each of the face-changing persons obtained based on all the second facial feature coordinate subsets.
In this embodiment, the second facial feature common feature is the common feature of the five sense organs corresponding to each face-changing person, obtained after common feature extraction is performed on the second facial feature coordinate set.
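A rough sketch of this per-group extraction is given below. It assumes an external landmark detector that returns an (N, 2) array of five-sense-organ contour coordinates for each video; the detector and the grouping keys are assumptions, not part of the patent text.

```python
import numpy as np

def organ_commonality_by_group(grouped_videos, detect_landmarks):
    """Compute the first/second facial feature common features.

    grouped_videos: {group_key: [video, ...]}, where group_key is a gender
    (first subsets) or a face-changing person (second subsets).
    detect_landmarks: callable that returns an (N, 2) array of five-sense-organ
    contour coordinates for one video (e.g. a 68-point landmark detector).
    """
    common = {}
    for key, videos in grouped_videos.items():
        coordinate_set = np.stack([detect_landmarks(v) for v in videos])  # (V, N, 2)
        # common feature: per-landmark median position across the group's videos
        common[key] = np.median(coordinate_set, axis=0)
    return common
```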
The beneficial effects of the above technology are: performing common feature extraction on a first face changing video subset corresponding to each gender contained in the AI face changing video set to obtain a first five-feature common feature corresponding to each gender; meanwhile, the common feature extraction is carried out on the second face changing video sub-set corresponding to each face changing person contained in the AI face changing video set, so that the common feature of the second facial features corresponding to each face changing person is obtained, and a data basis is provided for the subsequent reduction of the facial feature coordinate set corresponding to the object to be reduced.
Example 5:
on the basis of embodiment 4, the method for intelligently restoring an original face image based on an AI face change video extracts a facial muscle movement commonality feature corresponding to the AI face change video set, and includes:
extracting a first facial muscle movement common characteristic corresponding to the gender and the expression from the AI face changing video set;
extracting second facial muscle motion common characteristics corresponding to the expressions corresponding to the face-changing persons from the AI face-changing video set;
wherein the facial muscle movement commonality features include: the first facial muscle motion commonality characteristic and the second facial muscle motion commonality characteristic.
In this embodiment, the first facial muscle motion commonality feature is a facial muscle motion commonality feature extracted from the AI face change video set and corresponding to the gender-corresponding expression.
In this embodiment, the second facial muscle motion commonality feature is a facial muscle motion commonality feature extracted from the AI face-changing video set and corresponding to the expression of the face-changing person.
The beneficial effects of the above technology are: by extracting the first facial muscle motion common characteristics corresponding to the gender and the expression corresponding to the face-changing person from the AI face-changing video set, an important basis is provided for accurately restoring the facial arch height set corresponding to the object to be restored subsequently.
Example 6:
on the basis of embodiment 5, the method for intelligently restoring an original face image based on an AI face-changing video extracts a first facial muscle movement commonality feature corresponding to a gender and an expression from the AI face-changing video set, and includes:
determining a first face changing video contained in a first face changing video subset corresponding to each gender contained in the AI face changing video set;
performing expression analysis on the first face changing video to obtain a first face changing video segment set corresponding to each expression of each gender;
determining a first face changing image contained in the first face changing video segment set;
determining each local muscle region contained in the first face-changed image based on a standard facial muscle profile;
and dynamically analyzing each local muscle area contained in the first face changing video segment set to obtain a first facial muscle motion common characteristic corresponding to the gender and the expression.
In this embodiment, the first face change video is a face change video included in a first face change video subset corresponding to each gender included in the AI face change video set.
In this embodiment, the first face-changing video segment set is a set formed by face-changing video segments corresponding to each expression of each gender obtained after performing expression analysis on the first face-changing video.
In this embodiment, the first face-changed image is a face-changed image included in the first face-changed video segment set.
In this embodiment, the standard facial muscle profile is a schematic diagram showing the distribution of normal facial expression muscles.
In this embodiment, the local muscle region is an image region corresponding to each of the local muscles included in the first face change image determined based on the standard facial muscle profile.
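A rough sketch of the two preparation steps (expression-based segmentation and local muscle region extraction) is given below; the expression classifier and the aligned region boxes of the standard facial muscle profile are assumed inputs rather than parts of the patent text.

```python
from itertools import groupby

def split_by_expression(frames, classify_expression):
    """Cut a face-changing video into segments, one run of frames per expression.

    classify_expression maps a frame to an expression label; the classifier
    itself is assumed to exist and is not part of this sketch.
    """
    labelled = [(classify_expression(frame), frame) for frame in frames]
    segments = {}
    for label, run in groupby(labelled, key=lambda item: item[0]):
        segments.setdefault(label, []).append([frame for _, frame in run])
    return segments

def crop_muscle_regions(face_image, muscle_profile):
    """Cut each local muscle region out of a face-changed image.

    muscle_profile: {region_name: (y0, y1, x0, x1)} boxes taken from a standard
    facial muscle profile, assumed to be already aligned to the detected face.
    """
    return {name: face_image[y0:y1, x0:x1]
            for name, (y0, y1, x0, x1) in muscle_profile.items()}
```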
The beneficial effects of the above technology are: the first face-changing videos contained in the first face-changing video subset corresponding to each gender are analyzed according to the expressions, so that the first facial muscle motion commonality characteristics corresponding to the expressions corresponding to the gender are obtained, and an important basis is provided for accurately reducing the facial arch height set corresponding to the object to be reduced subsequently.
Example 7:
on the basis of embodiment 6, the method for intelligently restoring an original face image based on an AI face-changing video dynamically analyzes each local muscle region included in a first face-changing video segment set to obtain a first facial muscle motion commonality characteristic corresponding to a gender-corresponding expression, and includes:
extracting a dynamic video segment corresponding to each local muscle region from the first face changing video segment set;
determining a preset Hessian matrix corresponding to each preset characteristic point in a preset characteristic point set corresponding to the local muscle area;
determining a first Hessian matrix corresponding to each first coordinate point contained in a local muscle area corresponding to each first video frame in the dynamic video segment;
calculating a corresponding similarity value between the preset Hessian matrix and each first Hessian matrix;
taking a second coordinate point corresponding to the first Hessian matrix corresponding to the minimum similarity value as a corresponding first characteristic point contained in a corresponding local muscle area corresponding to the corresponding preset characteristic point;
determining all first characteristic points contained in the local muscle area to obtain a corresponding first characteristic point set;
carrying out binarization processing on each second video frame in the first face changing video to obtain a corresponding third video frame;
performing light and shadow feature extraction on the third video frame to obtain a corresponding first light and shadow feature;
determining a light and shadow conversion matrix between the first light and shadow feature and a corresponding preset light and shadow feature under a preset light and shadow condition;
converting the corresponding local muscle area in the corresponding first video frame into a corresponding standard image under a preset light and shadow condition based on the light and shadow conversion matrix;
dividing a local muscle region contained in the first video frame into a plurality of shadow analysis regions based on the first set of feature points;
sequencing the light and shadow analysis regions according to the frame sequence corresponding to the dynamic video segment to obtain a corresponding light and shadow analysis region change sequence;
converting the light and shadow analysis region change sequence into a corresponding local muscle arch height sequence;
and summarizing the local muscle arch height sequence to obtain a first facial muscle movement commonality characteristic corresponding to the gender-corresponding expression.
In this embodiment, the dynamic video segment is a video segment corresponding to each local muscle region extracted from the first face-changing video segment set.
In this embodiment, the preset hessian matrix is a preset hessian matrix representing local curvature of the local muscle at the feature point, corresponding to each preset feature point in the local muscle region.
In this embodiment, the preset feature point set is a set of preset feature points included in the local muscle region.
In this embodiment, the predetermined feature points are preset bone points in the corresponding local muscle regions.
In this embodiment, the first video frame is a video frame included in the dynamic video segment.
In this embodiment, the first coordinate point is a coordinate point included in the local muscle region.
In this embodiment, the first hessian matrix is a hessian matrix corresponding to each first coordinate point included in the local muscle area corresponding to each first video frame in the dynamic video segment and representing a local curvature at the corresponding local muscle area and corresponding to the first coordinate point.
In this embodiment, the first feature point is a feature point corresponding to the preset feature point in the local muscle region, and is also a second coordinate point corresponding to the first hessian matrix corresponding to the preset feature point when the similarity value of the preset hessian matrix corresponding to the preset feature point is minimum.
In this embodiment, the similarity value is a value representing the similarity between the first hessian matrix and the corresponding preset hessian matrix, and a larger similarity value represents a higher similarity between the first hessian matrix and the corresponding preset hessian matrix, and vice versa.
In this embodiment, calculating the corresponding similarity value between the preset Hessian matrix and each first Hessian matrix includes computing, for example, a normalized element-by-element difference value and taking its complement as the similarity value:

$$\Delta_i=\frac{1}{n}\sum_{j=1}^{n}\frac{\left|h_{0,j}-h_{i,j}\right|}{\max\left(\left|h_{0,j}\right|,\left|h_{i,j}\right|\right)},\qquad S_i=1-\Delta_i$$

In the formula, $\Delta_i$ is the corresponding difference value between the preset Hessian matrix and the $i$-th first Hessian matrix; $i$ is the current number of the first Hessian matrix, and the value range of $i$ is $[1, m]$, where $m$ is the total number of first coordinate points contained in the local muscle region; $j$ is the number of a value in the preset Hessian matrix or the first Hessian matrix; $n$ is the total number of values contained in the preset Hessian matrix or the first Hessian matrix; $h_{0,j}$ is the $j$-th value in the preset Hessian matrix; $h_{i,j}$ is the $j$-th value of the $i$-th first Hessian matrix; and $S_i$ is the corresponding similarity value between the preset Hessian matrix and the $i$-th first Hessian matrix.

For example, for one pair of preset and first Hessian matrices, $\Delta_i$ is 0.7 and the corresponding similarity value $S_i$ is 0.3.
In this embodiment, the second coordinate point is a first coordinate point corresponding to the first hessian matrix corresponding to the preset feature point when the similarity value of the preset hessian matrix corresponding to the preset feature point is minimum.
In this embodiment, the first feature point set is a set of all the first feature points included in the local muscle region.
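For concreteness, the feature-point matching can be sketched as below, assuming grayscale local muscle region patches and preset 2x2 Hessian matrices for the preset bone points. The selection criterion (take the candidate whose Hessian differs least, element by element, from the preset Hessian) and the normalization are assumptions of this sketch rather than the patent's exact rule.

```python
import numpy as np

def local_hessian(gray_patch, y, x):
    """2x2 Hessian of image intensity at (y, x), estimated with central differences."""
    gy, gx = np.gradient(gray_patch.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return np.array([[gxx[y, x], gxy[y, x]],
                     [gyx[y, x], gyy[y, x]]])

def match_feature_points(gray_patch, preset_hessians, candidate_points):
    """Pick, for each preset feature point, the candidate first coordinate point
    whose local Hessian differs least from the preset Hessian.

    preset_hessians: {point_name: 2x2 array}; candidate_points: list of (y, x).
    The element-wise comparison mirrors the difference value described above.
    """
    matched = {}
    for name, h0 in preset_hessians.items():
        best_point, best_delta = None, float("inf")
        for (y, x) in candidate_points:
            h = local_hessian(gray_patch, y, x)
            delta = np.mean(np.abs(h0 - h) /
                            (np.maximum(np.abs(h0), np.abs(h)) + 1e-9))
            if delta < best_delta:
                best_point, best_delta = (y, x), delta
        matched[name] = best_point
    return matched
```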
In this embodiment, the third video frame is a video frame obtained by performing binarization processing on the second video frame.
In this embodiment, the first light and shadow feature is a feature representing a light and shadow parameter of the third video frame, which is obtained after the light and shadow feature extraction is performed on the third video frame.
In this embodiment, the predetermined light and shadow feature is a light and shadow feature corresponding to the predetermined light and shadow condition.
In this embodiment, the light and shadow conversion matrix is a conversion matrix between the first light and shadow feature and a corresponding predetermined light and shadow feature under a predetermined light and shadow condition.
In this embodiment, the preset light and shadow condition is the light and shadow condition corresponding to the preset restoration scenario, and is specifically set according to the restoration requirement.
In this embodiment, the standard image is the image obtained by converting the corresponding local muscle area in the corresponding first video frame to the preset light and shadow condition based on the light and shadow conversion matrix.
In this embodiment, the light and shadow analysis region is a plurality of regions into which the local muscle region included in the first video frame is divided based on the first feature point set.
In this embodiment, the light and shadow analysis region change sequence is a sequence representing a change of a corresponding light and shadow analysis region obtained by sorting the light and shadow analysis region according to a frame sequence corresponding to the dynamic video segment.
In this embodiment, converting the light and shadow analysis region variation sequence into a corresponding local muscle arch height sequence is: and determining a sequence representing the change of the local muscle arch height corresponding to each light and shadow analysis region based on a relation function (specifically set according to the imaging parameters of the face-changing video) between the light and shadow parameters (such as contrast) corresponding to each light and shadow analysis region and the muscle arch height contained in the light and shadow analysis region change sequence.
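The conversion from light-and-shadow changes to arch heights can be sketched as follows, using contrast as the shadow parameter and an externally supplied relation function; both choices are assumptions for illustration, since the patent leaves the relation to be set from the imaging parameters of the face-changing video.

```python
import numpy as np

def arch_height_sequence(region_frames, shadow_to_height):
    """Turn a light-and-shadow analysis region change sequence into a local
    muscle arch height sequence.

    region_frames: grayscale arrays for one light-and-shadow analysis region,
    one per frame of the dynamic video segment, already converted to the
    preset light and shadow condition.
    shadow_to_height: relation function between the shadow parameter and the
    muscle arch height, passed in because it is calibrated elsewhere.
    """
    heights = []
    for region in region_frames:
        contrast = float(region.max()) - float(region.min())  # shadow parameter (contrast)
        heights.append(shadow_to_height(contrast))
    return heights

# usage with a hypothetical linear relation calibrated elsewhere:
# heights = arch_height_sequence(frames, lambda contrast: 0.02 * contrast)
```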
The beneficial effects of the above technology are: the method comprises the steps of accurately marking characteristic points contained in a local muscle area based on a Hessian matrix, meanwhile, carrying out light and shadow characteristic standardization on the local muscle area, marking out a corresponding light and shadow analysis area based on the standardized local muscle area and the determined characteristic points, determining the arch height change of muscles in the corresponding local area according to a change sequence of the corresponding light and shadow analysis area in a corresponding frame sequence, and further predicting the local muscle thickness of an object to be restored, so that an important basis is provided for restoring an original face image of the object to be restored.
Example 8:
on the basis of embodiment 7, the method for intelligently restoring an original face image based on an AI face-changing video extracts a second facial muscle motion commonality characteristic corresponding to a corresponding expression of a face-changing person from the AI face-changing video set, and includes:
determining second face changing videos contained in a second face changing video sub-set corresponding to each face changing person contained in the AI face changing video set;
performing expression analysis on the second face changing video to obtain a second face changing video segment set corresponding to each expression of each face changing person;
determining a second face changing image contained in the second face changing video segment set;
determining each local muscle region contained in the second face-changed image based on a standard facial muscle profile;
and dynamically analyzing each local muscle area contained in the second face changing video segment set to obtain a second facial muscle motion common characteristic corresponding to the corresponding expression of the face changing person.
In this embodiment, the second face change video is a face change video included in a second face change video subset corresponding to each face change person included in the AI face change video set.
In this embodiment, the second face changing video segment set is a set formed by face changing video segments corresponding to each expression of each face changing person obtained by performing expression analysis on the second face changing video.
In this embodiment, the second face-changed image is a face-changed image included in the second face-changed video segment set.
The beneficial effects of the above technology are: the second face-changing videos contained in the second face-changing video sub-set corresponding to each face-changing person are analyzed according to the expressions, so that second face muscle motion common characteristics corresponding to the expressions corresponding to the face-changing persons are obtained, and an important basis is provided for subsequently and accurately restoring a face arch height set corresponding to an object to be restored.
Example 9:
on the basis of embodiment 8, the method for intelligently restoring an original face image based on an AI face changing video, which determines, based on the common features, a facial muscle arch height set and a facial feature coordinate set of the object to be restored in a preset face state as corresponding facial features, includes:
performing difference comparison on the common characteristic of the first five sense organs and the characteristic of the first standard five sense organs corresponding to the corresponding gender to obtain the characteristic deviation of the first five sense organs corresponding to the corresponding gender;
performing difference comparison on the first facial muscle motion common characteristics and first standard facial muscle motion characteristics corresponding to the gender-corresponding expression to obtain first facial muscle motion characteristic deviation corresponding to the gender-corresponding expression;
summarizing the first five-sense-organ feature deviation corresponding to each gender and all the corresponding first facial muscle movement characteristic deviations to obtain a first total deviation corresponding to each gender;
taking the gender corresponding to the smallest first total deviation as the restored gender corresponding to the object to be restored;
determining a first face-changing person corresponding to the restored gender;
performing difference comparison on a second facial feature common characteristic corresponding to the first face-changing person in a preset face state and a second standard facial feature corresponding to the first face-changing person in the preset face state to obtain a second facial feature deviation corresponding to the first face-changing person in the preset face state;
carrying out common feature extraction on second facial feature deviations corresponding to all first face changing persons to obtain corresponding common features of the facial feature deviations of the object to be restored in a preset face state;
determining a coordinate set of the five sense organs corresponding to the object to be restored based on the common features of the deviation of the five sense organs and the second standard features of the five sense organs;
comparing a difference between a second face muscle motion common characteristic corresponding to the first face-changing person in a preset face state and a second standard face muscle motion characteristic corresponding to the first face-changing person in the preset face state to obtain a first face muscle motion characteristic deviation corresponding to the first face-changing person in the preset face state;
performing common feature extraction on the first face muscle movement feature deviations corresponding to all the first face changing persons to obtain the common feature of the face muscle movement deviation corresponding to the object to be restored in a preset face state;
determining a facial muscle arch height set corresponding to the object to be restored based on the facial muscle movement deviation commonality characteristics and the second standard facial muscle movement characteristics;
and taking the facial muscle arch height set and the facial organ coordinate set of the object to be restored in a preset facial state as corresponding facial features.
In this embodiment, the first five-sense-organ feature deviation is the five-sense-organ feature deviation corresponding to the corresponding gender, obtained by performing difference comparison between the first five-sense-organ common characteristic and the first standard five-sense-organ characteristic corresponding to that gender.
In this embodiment, the first standard five-sense-organ characteristic is the standard characteristic of the five sense organs corresponding to the corresponding gender.
In this embodiment, the first facial muscle movement feature deviation is a facial muscle movement feature deviation corresponding to the gender-corresponding expression obtained by performing difference comparison on the first facial muscle movement common feature and the first standard facial muscle movement feature corresponding to the gender-corresponding expression.
In this embodiment, the first standard facial muscle movement feature is a standard facial muscle movement feature corresponding to a gender-corresponding expression.
In this embodiment, the first total deviation is the total deviation corresponding to each gender, obtained by summarizing the first five-sense-organ feature deviation corresponding to that gender and all the corresponding first facial muscle movement characteristic deviations.
In this embodiment, the restored gender is the gender corresponding to the smallest first total deviation.
In this embodiment, the first face-changing person is a face-changing person corresponding to the restored gender in the face-changing video set.
In this embodiment, the second facial feature deviation is a facial feature deviation corresponding to the first facial feature changed person in the preset facial state, which is obtained by comparing a second facial feature common to the first facial feature changed person in the preset facial state with a second standard facial feature corresponding to the first facial feature changed person in the preset facial state.
In this embodiment, the second common feature of five sense organs is the common feature of five sense organs corresponding to the first face-changing person in the predetermined face state.
In this embodiment, the second standard facial feature is a standard facial feature corresponding to the first face-changing person in a preset face state.
In this embodiment, the feature of commonality of deviation of five sense organs is a feature representing commonality of deviation of five sense organs corresponding to the object to be restored in a preset face state, which is obtained after extracting the feature of commonality of the second five sense organs corresponding to all the first face-changed persons.
In this embodiment, the facial feature coordinate set is a set formed by the facial feature contour coordinates corresponding to the object to be restored, which are determined based on the common features of the deviation of the five sense organs and the second standard facial features.
In this embodiment, the deviation of the first facial muscle movement characteristic is a deviation of a facial muscle movement characteristic corresponding to the first face-changed person in the preset facial state, which is obtained by performing difference comparison between a second facial muscle movement commonality characteristic corresponding to the first face-changed person in the preset facial state and a second standard facial muscle movement characteristic corresponding to the first face-changed person in the preset facial state.
In this embodiment, the second facial muscle movement commonality characteristic is a facial muscle movement commonality characteristic corresponding to the first face-changing person in a preset facial state.
In this embodiment, the second standard facial muscle movement characteristic is a standard facial muscle movement characteristic corresponding to the first face-changing person in a preset facial state.
In this embodiment, the facial muscle movement deviation commonality characteristic is a characteristic representing facial muscle movement deviation commonality corresponding to the object to be restored in a preset facial state, which is obtained after the commonality characteristic extraction is performed on the first facial muscle movement characteristic deviations corresponding to all the first face-changing persons.
In this embodiment, the facial muscle arch height set is a set formed by all facial muscle arch heights corresponding to the object to be restored, which are determined based on the facial muscle movement deviation commonality characteristic and the second standard facial muscle movement characteristic.
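To make the deviation-and-restoration flow concrete, the per-person comparison and the final combination can be sketched as below. The element-wise arithmetic, the median as the commonality operator, and the "standard reference plus common deviation" combination rule are assumptions for illustration only.

```python
import numpy as np

def restore_feature_from_deviations(second_common, second_standard):
    """Example-9 style restoration for one kind of facial feature (the
    five-sense-organ coordinate set or the muscle arch height set).

    second_common: {person_id: common feature of that face-changing person in the
                    preset facial state, measured from the face-changing videos}
    second_standard: {person_id: standard feature of the same person in the
                      preset facial state}
    Features are assumed to be aligned numeric arrays, so that the difference
    comparison is an element-wise subtraction.
    """
    deviations = [np.asarray(second_common[p]) - np.asarray(second_standard[p])
                  for p in second_common]
    # deviation common feature of the object to be restored
    deviation_common = np.median(np.stack(deviations), axis=0)
    # reference standard feature in the preset facial state
    reference = np.median(np.stack([np.asarray(v) for v in second_standard.values()]),
                          axis=0)
    return reference + deviation_common
```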
The beneficial effects of the above technology are: the method comprises the steps of determining the reduction gender of an object to be reduced by carrying out difference comparison on common characteristics corresponding to each gender and common characteristics, carrying out difference comparison on the common characteristics corresponding to the reduction gender and standard common characteristics corresponding to a first face-changing person in a preset face state to obtain corresponding deviation common characteristics, combining the deviation common characteristics and the standard common characteristics corresponding to the preset face state to reduce a facial muscle arch height set and a facial organ coordinate set of the object to be reduced in the preset face state, further obtaining the facial characteristics corresponding to the object to be reduced, and providing a decisive basis for reducing an original face image corresponding to the object to be reduced in the preset face state.
Example 10:
the invention provides an AI face-changing video-based original face image intelligent restoration system, which refers to fig. 3 and comprises the following components:
the video acquisition module is used for acquiring an AI face changing video set of the object to be restored;
the feature extraction module is used for extracting the common features of the AI face changing video set and determining the facial features of the object to be restored in a preset facial state based on the common features;
and the image restoration module is used for restoring an original face image corresponding to the object to be restored in a preset face state based on the face features and a preset skull model.
The beneficial effects of the above technology are: the facial features of the object to be restored in the preset facial state are obtained by extracting the common features of a large number of AI face-changing videos of the object to be restored; the original face image corresponding to the object to be restored in the preset facial state is then restored based on these facial features, reducing the information authenticity crisis caused by AI face-changing videos.
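For illustration only, the three modules can be laid out as the following Python skeleton. The class and method names, the dictionary-based video records with assumed "gender" and "person" keys, and the placeholder bodies of the feature extraction and image restoration steps are assumptions made for the example; the substantive processing is described in the claims and embodiments.

```python
# Skeletal module layout (names and record format are illustrative assumptions).
from dataclasses import dataclass, field


@dataclass
class FaceSwapVideoSet:
    """AI face-changing videos grouped by gender (first subsets) and by face-changing person (second subsets)."""
    by_gender: dict = field(default_factory=dict)
    by_person: dict = field(default_factory=dict)


class VideoAcquisitionModule:
    def acquire(self, videos):
        """S101-S104: classify all face-changing videos of the object to be restored."""
        video_set = FaceSwapVideoSet()
        for video in videos:  # each video is a dict with assumed "gender" and "person" keys
            video_set.by_gender.setdefault(video["gender"], []).append(video)
            video_set.by_person.setdefault(video["person"], []).append(video)
        return video_set


class FeatureExtractionModule:
    def extract(self, video_set, preset_state):
        """Placeholder for commonality-feature extraction in the preset facial state."""
        return {"feature_coordinates": None, "muscle_arch_heights": None, "state": preset_state}


class ImageRestorationModule:
    def restore(self, facial_features, skull_model):
        """Placeholder for fitting the facial features to the preset skull model."""
        return {"features": facial_features, "skull_model": skull_model}
```

In this layout the video acquisition module performs the S101-S104 classification, while the other two modules expose single entry points that a restoration pipeline can call in sequence.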
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (2)

1. An AI face changing video-based original face image intelligent restoration method is characterized by comprising the following steps:
s1: acquiring an AI face changing video set of an object to be restored;
s2: extracting common features of the AI face changing video set, and determining facial features of the object to be restored in a preset facial state based on the common features;
s3: restoring an original face image corresponding to the object to be restored in a preset face state based on the face features and a preset skull model;
s1: the method for acquiring the AI face-changing video set of the object to be restored comprises the following steps:
s101: acquiring all face changing videos of an object to be restored;
s102: classifying all face-changing videos based on the gender of the corresponding face-changing person in each face-changing video to obtain a first face-changing video subset corresponding to each gender;
s103: classifying the first face changing video subset into a second face changing video subset corresponding to each face changing person based on the corresponding face changing person in the face changing video;
s104: summarizing all second face changing video subsets to obtain an AI face changing video set of the object to be restored;
s2: extracting the common features of the AI face-changing video set, and determining the facial features of the object to be restored in a preset facial state based on the common features, wherein the method comprises the following steps:
extracting the facial feature commonality characteristics corresponding to the AI face-changing video set;
extracting the facial muscle movement commonality characteristics corresponding to the AI face-changing video set;
determining a facial muscle arch height set and a facial feature coordinate set of the object to be restored in the preset facial state as the corresponding facial features based on the common features;
wherein the common features comprise: the facial feature commonality characteristics and the facial muscle movement commonality characteristics;
extracting the facial feature commonality characteristics corresponding to the AI face-changing video set comprises:
extracting first facial feature images contained in the first face-changing video subset corresponding to each gender in the AI face-changing video set, determining a first facial feature coordinate subset corresponding to each first facial feature image, obtaining a first facial feature coordinate set corresponding to each gender based on all the first facial feature coordinate subsets, and performing common feature extraction on the first facial feature coordinate set to obtain a first facial feature commonality characteristic corresponding to each gender;
extracting second facial feature images contained in the second face-changing video subset corresponding to each face-changing person in the AI face-changing video set, determining a second facial feature coordinate subset corresponding to each second facial feature image, obtaining a second facial feature coordinate set corresponding to each face-changing person based on all the second facial feature coordinate subsets, and performing common feature extraction on the second facial feature coordinate set to obtain a second facial feature commonality characteristic corresponding to each face-changing person;
wherein the facial feature commonality characteristics comprise: the first facial feature commonality characteristic and the second facial feature commonality characteristic;
extracting the facial muscle movement commonality characteristics corresponding to the AI face-changing video set, comprising:
extracting a first facial muscle movement commonality characteristic corresponding to the gender and the expression from the AI face-changing video set;
extracting a second facial muscle movement commonality characteristic corresponding to the expression of each face-changing person from the AI face-changing video set;
wherein the facial muscle movement commonality characteristics comprise: the first facial muscle movement commonality characteristic and the second facial muscle movement commonality characteristic;
extracting a first facial muscle movement commonality characteristic corresponding to the gender and expression from the AI face-changing video set, comprising:
determining a first face changing video contained in a first face changing video subset corresponding to each gender contained in the AI face changing video set;
performing expression analysis on the first face changing video to obtain a first face changing video segment set corresponding to each expression of each gender;
determining a first face changing image contained in the first face changing video segment set;
determining each local muscle region contained in the first face-changed image based on a standard facial muscle profile;
dynamically analyzing each local muscle area contained in the first face changing video segment set to obtain a first facial muscle motion common characteristic corresponding to gender and expression;
dynamically analyzing each local muscle area contained in the first face changing video segment set to obtain a first facial muscle motion commonality characteristic corresponding to gender and expression, comprising:
extracting a dynamic video segment corresponding to each local muscle region from the first face changing video segment set;
determining a preset Hessian matrix corresponding to each preset characteristic point in a preset characteristic point set corresponding to the local muscle area;
determining a first Hessian matrix corresponding to each first coordinate point contained in a local muscle area corresponding to each first video frame in the dynamic video segment;
calculating a corresponding similarity value between the preset Hessian matrix and each first Hessian matrix;
taking the second coordinate point corresponding to the first Hessian matrix with the minimum similarity value as the first characteristic point, within the corresponding local muscle area, that corresponds to the preset characteristic point;
determining all first characteristic points contained in the local muscle area to obtain a corresponding first characteristic point set;
carrying out binarization processing on each second video frame in the first face changing video to obtain a corresponding third video frame;
performing light and shadow feature extraction on the third video frame to obtain a corresponding first light and shadow feature;
determining a light and shadow conversion matrix between the first light and shadow feature and a corresponding preset light and shadow feature under a preset light and shadow condition;
converting the corresponding local muscle area in the corresponding first video frame into a corresponding standard image under a preset light and shadow condition based on the light and shadow conversion matrix;
dividing a local muscle region contained in the first video frame into a plurality of shadow analysis regions based on the first set of feature points;
sequencing the light and shadow analysis regions according to the frame sequence corresponding to the dynamic video segment to obtain a corresponding light and shadow analysis region change sequence;
converting the light and shadow analysis region change sequence into a corresponding local muscle arch height sequence;
summarizing the local muscle arch height sequence to obtain a first facial muscle movement commonality characteristic corresponding to the gender-corresponding expression;
extracting second facial muscle motion common characteristics corresponding to expressions corresponding to face-changing persons from the AI face-changing video set, wherein the second facial muscle motion common characteristics comprise:
determining second face-changing videos contained in the second face-changing video subset corresponding to each face-changing person in the AI face-changing video set;
performing expression analysis on the second face changing video to obtain a second face changing video segment set corresponding to each expression of each face changing person;
determining a second face changing image contained in the second face changing video segment set;
determining each local muscle region contained in the second face-changed image based on a standard facial muscle profile;
dynamically analyzing each local muscle area contained in the second face changing video segment set to obtain a second facial muscle motion common characteristic corresponding to the expression of the face changing person;
determining a facial muscle arch height set and a facial feature coordinate set of the object to be restored in the preset facial state as the corresponding facial features based on the common features, comprising:
performing difference comparison between the first facial feature commonality characteristic and the first standard facial features corresponding to the corresponding gender to obtain a first facial feature deviation corresponding to that gender;
performing difference comparison between the first facial muscle movement commonality characteristic and the first standard facial muscle movement characteristic corresponding to the corresponding expression of that gender to obtain a first facial muscle movement characteristic deviation corresponding to that expression of that gender;
summarizing the first facial feature deviation corresponding to each gender and all the corresponding first facial muscle movement characteristic deviations to obtain a first total deviation corresponding to each gender;
taking the gender corresponding to the smallest first total deviation as the restored gender corresponding to the object to be restored;
determining the first face-changing persons corresponding to the restored gender;
performing difference comparison between the second facial feature commonality characteristic corresponding to each first face-changing person in the preset facial state and the second standard facial features corresponding to that person in the preset facial state to obtain a second facial feature deviation corresponding to that person in the preset facial state;
performing common feature extraction on the second facial feature deviations corresponding to all the first face-changing persons to obtain the facial feature deviation commonality characteristic corresponding to the object to be restored in the preset facial state;
determining the facial feature coordinate set corresponding to the object to be restored based on the facial feature deviation commonality characteristic and the second standard facial features;
performing difference comparison between the second facial muscle movement commonality characteristic corresponding to each first face-changing person in the preset facial state and the second standard facial muscle movement characteristic corresponding to that person in the preset facial state to obtain a first facial muscle movement characteristic deviation corresponding to that person in the preset facial state;
performing common feature extraction on the first facial muscle movement characteristic deviations corresponding to all the first face-changing persons to obtain the facial muscle movement deviation commonality characteristic corresponding to the object to be restored in the preset facial state;
determining the facial muscle arch height set corresponding to the object to be restored based on the facial muscle movement deviation commonality characteristic and the second standard facial muscle movement characteristic;
taking the facial muscle arch height set and the facial feature coordinate set of the object to be restored in the preset facial state as the corresponding facial features;
the preset facial state is a preset facial expression and a preset light and shadow condition corresponding to the restoration of the face image of the object to be restored;
the local muscle arch height sequence is a sequence representing the local muscle arch height change corresponding to the shadow analysis area;
the dynamic video segment comprises a plurality of first video frames;
the first face changing video comprises a plurality of second video frames.
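For illustration only, two of the computational steps recited in claim 1, matching preset characteristic points through Hessian matrices and converting a light and shadow analysis region change sequence into a local muscle arch height sequence, can be sketched as follows. The Hessian is built from central second differences of a grayscale frame, the similarity value is modelled as the Frobenius norm of the matrix difference so that the minimum value marks the best match, and the arch height is taken as a scaled change in mean region brightness; these modelling choices and all identifiers are assumptions made for the example, not the claimed implementation.

```python
# Hedged sketch of Hessian-based characteristic point matching and the light-and-shadow
# to muscle-arch-height conversion (modelling assumptions noted above).
import numpy as np


def hessian_at(gray, y, x):
    """2x2 Hessian of image intensity at (y, x) from central second differences."""
    ixx = gray[y, x + 1] - 2.0 * gray[y, x] + gray[y, x - 1]
    iyy = gray[y + 1, x] - 2.0 * gray[y, x] + gray[y - 1, x]
    ixy = (gray[y + 1, x + 1] - gray[y + 1, x - 1]
           - gray[y - 1, x + 1] + gray[y - 1, x - 1]) / 4.0
    return np.array([[ixx, ixy], [ixy, iyy]])


def match_characteristic_point(gray, preset_hessian, candidate_points):
    """Return the candidate coordinate whose Hessian differs least from the preset Hessian."""
    best_point, best_value = None, np.inf
    for y, x in candidate_points:          # first coordinate points of the local muscle area
        value = float(np.linalg.norm(hessian_at(gray, y, x) - preset_hessian))
        if value < best_value:             # minimum similarity value wins
            best_point, best_value = (y, x), value
    return best_point


def local_muscle_arch_heights(region_frames, scale=1.0):
    """Map the mean-brightness change of one shadow analysis region across frames to arch heights."""
    base = float(np.mean(region_frames[0]))
    return [scale * (float(np.mean(frame)) - base) for frame in region_frames]


# Toy usage on a synthetic 8x8 grayscale frame.
frame = np.zeros((8, 8), dtype=float)
frame[3:5, 3:5] = 1.0
point = match_characteristic_point(frame, hessian_at(frame, 3, 3),
                                   [(y, x) for y in range(1, 7) for x in range(1, 7)])
heights = local_muscle_arch_heights([frame, frame * 1.1, frame * 1.2])
```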
2. An AI face-changing video-based original face image intelligent restoration system, characterized by comprising:
the video acquisition module is used for acquiring an AI face changing video set of the object to be restored;
the feature extraction module is used for extracting the common feature of the AI face changing video set and determining the facial feature of the object to be restored in a preset facial state based on the common feature;
the image restoration module is used for restoring an original face image corresponding to the object to be restored in a preset face state based on the face features and a preset skull model;
the video acquisition module comprises:
s101: acquiring all face changing videos of an object to be restored;
s102: classifying all face-changing videos based on the gender of the corresponding face-changing person in the face-changing videos to obtain a first face-changing video subset corresponding to each gender;
s103: classifying the first face changing video subset into a second face changing video subset corresponding to each face changing person based on the corresponding face changing person in the face changing video;
s104: summarizing all second face changing video subsets to obtain an AI face changing video set of the object to be restored;
the feature extraction module comprises:
extracting the facial feature commonality characteristics corresponding to the AI face-changing video set;
extracting the facial muscle movement commonality characteristics corresponding to the AI face-changing video set;
determining a facial muscle arch height set and a facial feature coordinate set of the object to be restored in the preset facial state as the corresponding facial features based on the common features;
wherein the common features comprise: the facial feature commonality characteristics and the facial muscle movement commonality characteristics;
extracting the facial feature commonality characteristics corresponding to the AI face-changing video set comprises:
extracting first facial feature images contained in the first face-changing video subset corresponding to each gender in the AI face-changing video set, determining a first facial feature coordinate subset corresponding to each first facial feature image, obtaining a first facial feature coordinate set corresponding to each gender based on all the first facial feature coordinate subsets, and performing common feature extraction on the first facial feature coordinate set to obtain a first facial feature commonality characteristic corresponding to each gender;
extracting second facial feature images contained in the second face-changing video subset corresponding to each face-changing person in the AI face-changing video set, determining a second facial feature coordinate subset corresponding to each second facial feature image, obtaining a second facial feature coordinate set corresponding to each face-changing person based on all the second facial feature coordinate subsets, and performing common feature extraction on the second facial feature coordinate set to obtain a second facial feature commonality characteristic corresponding to each face-changing person;
wherein the facial feature commonality characteristics comprise: the first facial feature commonality characteristic and the second facial feature commonality characteristic;
extracting the facial muscle movement commonality characteristics corresponding to the AI face-changing video set, comprising:
extracting a first facial muscle movement commonality characteristic corresponding to the gender and the expression from the AI face-changing video set;
extracting a second facial muscle movement commonality characteristic corresponding to the expression of each face-changing person from the AI face-changing video set;
wherein the facial muscle movement commonality characteristics comprise: the first facial muscle movement commonality characteristic and the second facial muscle movement commonality characteristic;
extracting a first facial muscle movement commonality characteristic corresponding to the gender and expression from the AI face-changing video set, comprising:
determining a first face changing video contained in a first face changing video subset corresponding to each gender contained in the AI face changing video set;
performing expression analysis on the first face changing video to obtain a first face changing video segment set corresponding to each expression of each gender;
determining a first face changing image contained in the first face changing video segment set;
determining each local muscle region contained in the first face-changed image based on a standard facial muscle profile;
dynamically analyzing each local muscle area contained in the first face changing video segment set to obtain a first facial muscle motion common characteristic corresponding to gender and expression;
dynamically analyzing each local muscle area contained in the first face changing video segment set to obtain a first facial muscle motion commonality characteristic corresponding to gender and expression, comprising:
extracting a dynamic video segment corresponding to each local muscle region from the first face changing video segment set;
determining a preset Hessian matrix corresponding to each preset characteristic point in a preset characteristic point set corresponding to the local muscle area;
determining a first Hessian matrix corresponding to each first coordinate point contained in a local muscle area corresponding to each first video frame in the dynamic video segment;
calculating a corresponding similarity value between the preset Hessian matrix and each first Hessian matrix;
taking the second coordinate point corresponding to the first Hessian matrix with the minimum similarity value as the first characteristic point, within the corresponding local muscle area, that corresponds to the preset characteristic point;
determining all first characteristic points contained in the local muscle area to obtain a corresponding first characteristic point set;
carrying out binarization processing on each second video frame in the first face changing video to obtain a corresponding third video frame;
performing light and shadow feature extraction on the third video frame to obtain corresponding first light and shadow features;
determining a light and shadow conversion matrix between the first light and shadow feature and a corresponding preset light and shadow feature under a preset light and shadow condition;
converting the corresponding local muscle area in the corresponding first video frame into a corresponding standard image under a preset light and shadow condition based on the light and shadow conversion matrix;
dividing a local muscle region contained in the first video frame into a plurality of shadow analysis regions based on the first set of feature points;
sequencing the light and shadow analysis regions according to the frame sequence corresponding to the dynamic video segment to obtain a corresponding light and shadow analysis region change sequence;
converting the light and shadow analysis region change sequence into a corresponding local muscle arch height sequence;
summarizing the local muscle arch height sequence to obtain a first facial muscle movement commonality characteristic corresponding to the gender-corresponding expression;
extracting second facial muscle motion common characteristics corresponding to expressions corresponding to face-changing persons from the AI face-changing video set, wherein the second facial muscle motion common characteristics comprise:
determining second face-changing videos contained in the second face-changing video subset corresponding to each face-changing person in the AI face-changing video set;
performing expression analysis on the second face changing video to obtain a second face changing video segment set corresponding to each expression of each face changing person;
determining a second face changing image contained in the second face changing video segment set;
determining each local muscle region contained in the second face-changed image based on a standard facial muscle profile;
dynamically analyzing each local muscle area contained in the second face changing video segment set to obtain a second facial muscle motion common characteristic corresponding to the expression of the face changing person;
determining a facial muscle arch height set and a facial feature coordinate set of the object to be restored in the preset facial state as the corresponding facial features based on the common features, comprising:
performing difference comparison between the first facial feature commonality characteristic and the first standard facial features corresponding to the corresponding gender to obtain a first facial feature deviation corresponding to that gender;
performing difference comparison between the first facial muscle movement commonality characteristic and the first standard facial muscle movement characteristic corresponding to the corresponding expression of that gender to obtain a first facial muscle movement characteristic deviation corresponding to that expression of that gender;
summarizing the first facial feature deviation corresponding to each gender and all the corresponding first facial muscle movement characteristic deviations to obtain a first total deviation corresponding to each gender;
taking the gender corresponding to the smallest first total deviation as the restored gender corresponding to the object to be restored;
determining the first face-changing persons corresponding to the restored gender;
performing difference comparison between the second facial feature commonality characteristic corresponding to each first face-changing person in the preset facial state and the second standard facial features corresponding to that person in the preset facial state to obtain a second facial feature deviation corresponding to that person in the preset facial state;
performing common feature extraction on the second facial feature deviations corresponding to all the first face-changing persons to obtain the facial feature deviation commonality characteristic corresponding to the object to be restored in the preset facial state;
determining the facial feature coordinate set corresponding to the object to be restored based on the facial feature deviation commonality characteristic and the second standard facial features;
performing difference comparison between the second facial muscle movement commonality characteristic corresponding to each first face-changing person in the preset facial state and the second standard facial muscle movement characteristic corresponding to that person in the preset facial state to obtain a first facial muscle movement characteristic deviation corresponding to that person in the preset facial state;
performing common feature extraction on the first facial muscle movement characteristic deviations corresponding to all the first face-changing persons to obtain the facial muscle movement deviation commonality characteristic corresponding to the object to be restored in the preset facial state;
determining the facial muscle arch height set corresponding to the object to be restored based on the facial muscle movement deviation commonality characteristic and the second standard facial muscle movement characteristic;
taking the facial muscle arch height set and the facial feature coordinate set of the object to be restored in the preset facial state as the corresponding facial features;
the preset facial state is a preset facial expression and a preset light and shadow condition corresponding to the restoration of the face image of the object to be restored;
the local muscle arch height sequence is a sequence representing the local muscle arch height change corresponding to the shadow analysis area;
the dynamic video segment comprises a plurality of first video frames;
the first face changing video comprises a plurality of second video frames.
CN202210320804.4A 2022-03-30 2022-03-30 AI face changing video-based original face image intelligent restoration method and system Active CN114494002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210320804.4A CN114494002B (en) 2022-03-30 2022-03-30 AI face changing video-based original face image intelligent restoration method and system

Publications (2)

Publication Number Publication Date
CN114494002A (en) 2022-05-13
CN114494002B (en) 2022-07-01

Family

ID=81488327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210320804.4A Active CN114494002B (en) 2022-03-30 2022-03-30 AI face changing video-based original face image intelligent restoration method and system

Country Status (1)

Country Link
CN (1) CN114494002B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112541473A (en) * 2020-12-24 2021-03-23 华南理工大学 Face changing video detection method based on human face vector time-space domain features and application

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10796480B2 (en) * 2015-08-14 2020-10-06 Metail Limited Methods of generating personalized 3D head models or 3D body models
CN110533585B (en) * 2019-09-04 2022-09-27 广州方硅信息技术有限公司 Image face changing method, device, system, equipment and storage medium
US11687778B2 (en) * 2020-01-06 2023-06-27 The Research Foundation For The State University Of New York Fakecatcher: detection of synthetic portrait videos using biological signals
CN112767303B (en) * 2020-08-12 2023-11-28 腾讯科技(深圳)有限公司 Image detection method, device, equipment and computer readable storage medium
CN113011357B (en) * 2021-03-26 2023-04-25 西安电子科技大学 Depth fake face video positioning method based on space-time fusion
CN113111750A (en) * 2021-03-31 2021-07-13 智慧眼科技股份有限公司 Face living body detection method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN114494002A (en) 2022-05-13

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20240402
Address after: Room 2401-2404, 24th Floor, Building 2, Yunkun Building, No. 8 Chuangye Road, Kunshan Development Zone, Suzhou City, Jiangsu Province, 215000
Patentee after: Silicon based (Kunshan) Intelligent Technology Co.,Ltd.
Country or region after: China
Address before: 510000 room 909d, Jiayue building, No. 38, Zhongshan Avenue, Tianhe District, Guangzhou, Guangdong
Patentee before: Guangzhou Gongping Technology Co.,Ltd.
Country or region before: China