CN112070662A - Evaluation method and device of face changing model, electronic equipment and storage medium


Info

Publication number: CN112070662A
Authority: CN (China)
Prior art keywords: face, image, changed, changing, person
Legal status: Granted; Active
Application number: CN202011259430.7A
Other languages: Chinese (zh)
Other versions: CN112070662B (en)
Inventor: 白云志
Current Assignee: Beijing Dajia Internet Information Technology Co Ltd
Original Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority: CN202011259430.7A
Publication of CN112070662A
Application granted; publication of CN112070662B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/04 - Context-preserving transformations, e.g. by using an importance map
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to an evaluation method and device for a face-changing model, an electronic device, and a storage medium. The method comprises the following steps: acquiring a binary image group and a ternary image group; determining, according to the face-changing model, a first face-changed image corresponding to a first person image and a second person image in the binary image group, and a second face-changed image corresponding to the first face-changed image and the face-changed person image; calculating a first error according to the second face-changed image and the face-changed person image; determining, according to the face-changing model, a third face-changed image corresponding to a third person image and a fourth person image in the ternary image group, a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group, and a fifth face-changed image corresponding to the target person image and the fifth person image; calculating a second error according to the fourth face-changed image and the fifth face-changed image; and determining an evaluation result according to the first error and the second error. The method and device thereby enable automated evaluation of the face-changing model with high accuracy.

Description

Evaluation method and device of face changing model, electronic equipment and storage medium
Technical Field
The embodiment of the disclosure relates to computer technologies, and in particular, to a method and an apparatus for evaluating a face-changing model, an electronic device, and a storage medium.
Background
With the continuous development of deep learning technology, AI (Artificial Intelligence) face changing has received extensive attention from both academia and industry.
In the related art, the face-changing effect of a face-changing model is mainly evaluated by manual inspection; for example, the consistency between a face-changed image generated by the model and the input original person image, and the authenticity of the generated face-changed image, are checked by human reviewers. This yields low evaluation accuracy, is heavily influenced by human factors, and makes it difficult to evaluate a face-changing model over large-scale sets of person images.
Disclosure of Invention
The present disclosure provides an evaluation method and apparatus for a face-changing model, an electronic device, and a storage medium, to at least solve the problems in the related art that the evaluation accuracy of a face-changing model is low, the evaluation is greatly affected by human factors, and it is difficult to evaluate a face-changing model through large-scale person images.
The technical solutions of the embodiments of the present disclosure are as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for evaluating a face-changing model, including:
acquiring at least one binary image group and at least one ternary image group, wherein each binary image group comprises two original person images with different facial feature data, and each ternary image group comprises three original person images with different facial feature data;
determining a first face-changed image corresponding to a first person image and a second person image in each binary image group according to a face-changing model, and determining a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group according to the face-changing model; the face-changing model is used for performing replacement operations on facial feature data;
calculating a first error of the binary image group according to the second face-changed image and the face-changed person image;
determining a third face-changed image corresponding to a third person image and a fourth person image in each ternary image group according to the face-changing model; determining a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face-changing model; determining a fifth face-changed image corresponding to the target person image and the fifth person image according to the face-changing model; the target person image is the face-changed person image among the third person image and the fourth person image;
calculating a second error of the ternary image group according to the fourth face-changed image and the fifth face-changed image;
and determining an evaluation result of the face changing model according to the first error and the second error.
Optionally, the step of determining, according to the face-changing model, a first face-changed image corresponding to the first person image and the second person image in each of the binary image groups includes:
replacing the facial feature data of the first person image in each binary image group with the facial feature data of the second person image through the face-changing model to obtain the first face-changed image; wherein the first person image is the face-changed person image;
correspondingly, the step of determining a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group according to the face-changing model includes:
replacing the facial feature data of the first face-changed image with the facial feature data of the first person image through the face-changing model to obtain the second face-changed image.
Optionally, the step of calculating a first error of the binary image group according to the second face-changed image and the face-changed person image includes:
calculating the gray-level difference of each pair of corresponding pixels of the second face-changed image and the face-changed person image in each binary image group, and taking the accumulated value of the gray-level differences as the first error.
Optionally, the step of determining a third face-changed image corresponding to a third person image and a fourth person image in each of the ternary image groups according to the face-changing model includes:
replacing the facial feature data of the third person image with the facial feature data of the fourth person image through the face-changing model to obtain the third face-changed image; wherein the third person image is the target person image;
the step of determining a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face-changing model comprises:
replacing the facial feature data of the third face-changed image with the facial feature data of the fifth person image through the face-changing model to obtain the fourth face-changed image;
the step of determining a fifth face-changed image corresponding to the target person image and the fifth person image according to the face-changing model includes:
replacing the facial feature data of the third person image with the facial feature data of the fifth person image through the face-changing model to obtain the fifth face-changed image.
Optionally, the step of calculating a second error of the ternary image group according to the fourth face-changed image and the fifth face-changed image includes:
calculating the gray-level difference of each pair of corresponding pixels of the fourth face-changed image and the fifth face-changed image in each ternary image group, and taking the accumulated value of the gray-level differences as the second error.
Optionally, the step of determining an evaluation result of the face-changing model according to the first error and the second error includes:
calculating a total error according to the sum of all the first errors and the sum of all the second errors, and determining the evaluation result of the face-changing model according to the total error.
According to a second aspect of the embodiments of the present disclosure, there is provided an evaluation apparatus of a face-changing model, including:
an acquisition module configured to acquire at least one binary image group and at least one ternary image group, wherein each binary image group comprises two original person images with different facial feature data, and each ternary image group comprises three original person images with different facial feature data;
a first determining module configured to determine a first face-changed image corresponding to a first person image and a second person image in each of the binary image groups according to the face-changing model, and to determine a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group according to the face-changing model; the face-changing model is used for performing replacement operations on facial feature data;
a first error calculation module configured to calculate a first error of the binary image group from the second face-changed image and the face-changed person image;
a second determining module configured to determine a third face-changed image corresponding to a third person image and a fourth person image in each of the ternary image groups according to the face-changing model; to determine a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face-changing model; and to determine a fifth face-changed image corresponding to the target person image and the fifth person image according to the face-changing model; the target person image is the face-changed person image among the third person image and the fourth person image;
a second error calculation module configured to calculate a second error of the ternary image group from the fourth face-changed image and the fifth face-changed image;
a third determination module configured to determine an evaluation result of the face-changing model according to the first error and the second error.
Optionally, the first determining module includes a first face-changed image determining submodule and a second face-changed image determining submodule;
the first face-changed image determining submodule is configured to replace the facial feature data of the first person image in each binary image group with the facial feature data of the second person image through the face-changing model to obtain the first face-changed image; wherein the first person image is the face-changed person image;
the second face-changed image determining submodule is configured to replace the facial feature data of the first face-changed image with the facial feature data of the first person image through the face-changing model to obtain the second face-changed image.
Optionally, the first error calculation module is specifically configured to calculate the gray-level difference of each pair of corresponding pixels of the second face-changed image and the face-changed person image in each binary image group, and take the accumulated value of the gray-level differences as the first error.
Optionally, the second determining module includes a third face-changed image determining submodule configured to replace the facial feature data of the third person image with the facial feature data of the fourth person image through the face-changing model to obtain the third face-changed image, wherein the third person image is the target person image;
the second determining module further includes a fourth face-changed image determining submodule configured to replace the facial feature data of the third face-changed image with the facial feature data of the fifth person image through the face-changing model to obtain the fourth face-changed image;
the second determining module further includes a fifth face-changed image determining submodule configured to replace the facial feature data of the third person image with the facial feature data of the fifth person image through the face-changing model to obtain the fifth face-changed image.
Optionally, the second error calculation module is specifically configured to calculate the gray-level difference of each pair of corresponding pixels of the fourth face-changed image and the fifth face-changed image in each ternary image group, and take the accumulated value of the gray-level differences as the second error.
Optionally, the third determining module is specifically configured to calculate a total error according to the sum of all the first errors and the sum of all the second errors, and determine the evaluation result of the face-changing model according to the total error.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the instructions to implement the evaluation method of the face changing model according to any embodiment of the disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, where instructions executed by a processor of an electronic device enable the electronic device to perform the evaluation method of a face change model according to any of the embodiments of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product, wherein when the instructions in the computer program product are executed by a processor of an electronic device, the method for evaluating a face-changing model according to any of the embodiments of the present disclosure is implemented.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects: by acquiring at least one binary image group and at least one ternary image group, large-scale original person images can be acquired, providing a basis for subsequently evaluating a face-changing model at scale. A first face-changed image corresponding to a first person image and a second person image in each binary image group is determined according to the face-changing model, and a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group is determined according to the face-changing model; a first error of the binary image group is calculated according to the second face-changed image and the face-changed person image, that is, an autoregressive consistency error corresponding to each binary image group is determined, providing a basis for improving the evaluation accuracy of the face-changing model. A third face-changed image corresponding to a third person image and a fourth person image in each ternary image group is determined according to the face-changing model; a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group is determined according to the face-changing model; and a fifth face-changed image corresponding to the target person image and the fifth person image is determined according to the face-changing model, the target person image being the face-changed person image among the third person image and the fourth person image. A second error of the ternary image group is calculated according to the fourth face-changed image and the fifth face-changed image, that is, an interaction consistency error corresponding to each ternary image group is determined, further providing a basis for improving the evaluation accuracy of the face-changing model. Finally, the face-changing effect of the face-changing model is evaluated according to the first errors and the second errors, which solves the problems in the related art that the evaluation accuracy of a face-changing model is low, the influence of human factors is large, and large-scale evaluation is difficult to achieve: the face-changing model is evaluated through large-scale person images without the influence of human factors, and the accuracy of the evaluation is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment.
Fig. 2A is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment.
Fig. 2B is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment.
Fig. 3A is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment.
Fig. 3B is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating a method of calculating the autoregressive consistency error of a corresponding binary image group according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating a method of calculating the interaction consistency error of a corresponding ternary image group according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an evaluation apparatus of a face-changing model according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an electronic device according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It is noted that the terms "first," "second," and the like, as used in this disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or described herein. The embodiments described below do not represent all embodiments consistent with the present disclosure; they merely serve to explain the present disclosure and do not limit it.
At present, the face-changing effect of a face-changing model is evaluated manually. The specific method is as follows: several people visually inspect and score a large number of picture triplets generated by the face-changing technique, each triplet is given a score, and the scores from all reviewers are finally aggregated into a final score for the face-changing effect. This approach consumes enormous manpower, since multiple people are typically required to score a large number of pictures. The evaluation also easily loses objectivity: because scoring is performed by human eyes, subjectivity is easily introduced. Moreover, due to the limitation of manpower, large-scale evaluation is difficult to perform, so scene coverage and accuracy are limited and the evaluation is inaccurate. Another approach computes the distance between the original person image and the generated face-changed image at the feature level using the FID (Fréchet Inception Distance), which represents the difference between two multivariate normal distributions. However, this method can only evaluate the authenticity of the generated face-changed image; it cannot evaluate the consistency between the face-changed image generated by the face-changing model and the original person image.
Fig. 1 is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment. The method may be used for evaluating the face-changing effect of a face-changing model, and may be executed by an evaluation apparatus of the face-changing model, which may be implemented in software and/or hardware and integrated in an electronic device; in this embodiment, the electronic device may be a computer, a server, a smartphone, a tablet computer, or another electronic device. Specifically, referring to Fig. 1, the method includes the following steps.
In step S11, at least one binary image group and at least one ternary image group are acquired.
The binary image group comprises two original person images with different facial feature data, and the ternary image group comprises three original person images with different facial feature data. It should be noted that each original person image referred to in this embodiment includes facial feature data and non-facial feature data; the facial feature data may include, but is not limited to, the eyes, nose, forehead, mouth, ears, and the like. It should be understood that the non-facial feature data is the image data other than the facial feature data in the original person image, for example, the image background, the illumination, or the clothes worn by the person, which is not limited in this embodiment.
For example, each binary image group may include a first person image and a second person image, and each ternary image group may include a third person image, a fourth person image, and a fifth person image. In this embodiment, the same person image may appear in both a binary image group and a ternary image group, which is not limited herein. For example, a person image A may be included in some binary image group M and also in some ternary image group N.
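As a minimal illustration only (this structure is not part of the patent text; the file names and the load_image helper are hypothetical), the image groups could be represented as tuples of arrays:

```python
from typing import List, Tuple

import numpy as np
from PIL import Image  # assumed available for image I/O

def load_image(path: str) -> np.ndarray:
    """Load a person image as an H x W x 3 uint8 array (hypothetical helper)."""
    return np.asarray(Image.open(path).convert("RGB"))

# A binary image group: two original person images with different facial feature data.
BinaryGroup = Tuple[np.ndarray, np.ndarray]
# A ternary image group: three original person images with different facial feature data.
TernaryGroup = Tuple[np.ndarray, np.ndarray, np.ndarray]

binary_groups: List[BinaryGroup] = [
    (load_image("person_a.png"), load_image("person_b.png")),
]
# The same person image may appear in both kinds of groups.
ternary_groups: List[TernaryGroup] = [
    (load_image("person_a.png"), load_image("person_b.png"), load_image("person_c.png")),
]
```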
In step S12, a first face-changed image corresponding to the first and second person images in each binary image group is determined based on the face-changing model, and a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group is determined based on the face-changing model.
The face-changing model is used for performing the replacement operation on facial feature data. It should be noted that the face-changing model in this embodiment is obtained by training a deep learning model on a large number of person images, and the face-changing model may be used directly on a computer or a server; it may also be a software module integrated in a hardware environment, for example, an animation special-effects module integrated in a smartphone APP (application), a face security monitoring module integrated in a camera, and the like, which is not limited in this embodiment.
It should be further noted that, because this embodiment mainly evaluates the face-changing effect of a face-changing model, all face-changing models involved in this embodiment are the same face-changing model, so as to ensure consistency. The advantage of this configuration is that the effect of that single face-changing model can be evaluated accurately, without several face-changing models being evaluated at the same time.
In an alternative implementation of this embodiment, the face-changing model may be used to perform the replacement operation on facial feature data. For example, the face-changing model may replace the facial feature data of an original person image B with the facial feature data of an original person image A; the facial feature data of the resulting new person image C is then the same as that of the original person image A, while its non-facial feature data is the same as that of the original person image B.
In an optional implementation manner of the embodiment, determining the first face change image corresponding to the first person image and the second person image in each binary image group according to the face change model may include: simultaneously inputting the first person image and the second person image into the face changing model; further, after the face changing model receives the first person image and the second person image, the face feature data of the first person image can be replaced by the face feature data of the second person image to obtain a first face changing image; the face feature data of the second person image can be replaced by the face feature data of the first person image to obtain a first face-changed image; this embodiment is not limited thereto.
It can be understood that the person image with the changed face in the binary image group may be the person image with the replaced face feature data in the binary image group, may be the first person image, and may also be the second person image; for example, if the facial feature data of the first person image is replaced by the facial feature data of the second person image to obtain a first face-changed image, the face-changed person image is the first person image; and if the face feature data of the second person image is replaced by the face feature data of the first person image to obtain a first face-changed image, the face-changed person image is the second person image.
Further, if the face changing model receives the first person image and the second person image, the face feature data of the first person image is replaced by the face feature data of the second person image to obtain a first face changing image, and then a second face changing image corresponding to the first face changing image and the first person image can be further determined according to the face changing model; it is to be understood that, if the face change model receives the first person image and the second person image, and then replaces the facial feature data of the second person image with the facial feature data of the first person image to obtain the first face change image, the second face change image corresponding to the first face change image and the second person image may be further determined according to the face change model.
For example, if the first person image is image A and the second person image is image B, the facial feature data of image B may be replaced with the facial feature data of image A through the face-changing model to obtain image C (the first face-changed image), and the facial feature data of image C may then be replaced with the facial feature data of image B (the face-changed person image) through the face-changing model to obtain image D (the second face-changed image). Alternatively, the facial feature data of image A may be replaced with the facial feature data of image B through the face-changing model to obtain image E (the first face-changed image), and the facial feature data of image E may then be replaced with the facial feature data of image A (the face-changed person image) through the face-changing model to obtain image F (the second face-changed image).
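The round trip just described can be sketched as follows. Here swap_face is a hypothetical wrapper whose name and signature are illustrative, not the patent's API; it stands for whatever interface the face-changing model under test exposes, returning the target image with its facial feature data replaced by that of the source image:

```python
import numpy as np

def swap_face(model, target: np.ndarray, source: np.ndarray) -> np.ndarray:
    """Return `target` with its facial feature data replaced by `source`'s.

    Placeholder for the face-changing model under test; the actual call
    depends on that model's interface.
    """
    return model(target, source)

def binary_round_trip(model, img_a: np.ndarray, img_b: np.ndarray):
    """Autoregression on a binary image group (A, B), taking B as the
    face-changed person image, mirroring the A/B example above."""
    img_c = swap_face(model, target=img_b, source=img_a)  # first face-changed image C
    img_d = swap_face(model, target=img_c, source=img_b)  # second face-changed image D
    return img_c, img_d  # ideally img_d reproduces img_b exactly
```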
In step S13, a first error of the binary image group is calculated from the second face-changed image and the face-changed person image.
Optionally, after the second face-changed image is obtained, a first error of the corresponding binary image group may be further calculated according to the obtained second face-changed image and the first person image, or a first error of the corresponding binary image group may be further calculated according to the obtained second face-changed image and the second person image.
In an alternative implementation manner of the embodiment, in the above example, the gray scale difference of each corresponding pixel of the image D (second face-changed image) and the image B (face-changed person image) may be calculated, and the accumulated value of the gray scale differences of the respective pixels may be used as the first error of the corresponding binary image group; it is also possible to calculate a gray-scale difference for each corresponding pixel of the image F (second face-changed image) and the image a (face-changed person image), and to take the accumulated value of the gray-scale differences for each pixel as the first error of the corresponding binary image group.
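A minimal sketch of the accumulated grayscale difference described above. The patent does not prescribe a particular grayscale conversion or normalization, so the luma weights below are an assumption:

```python
import numpy as np

def accumulated_gray_difference(img_x: np.ndarray, img_y: np.ndarray) -> float:
    """Sum of absolute grayscale differences over all corresponding pixels."""
    if img_x.shape != img_y.shape:
        raise ValueError("images must have identical shapes")
    # Convert RGB to grayscale with standard luma weights (an assumption;
    # the description only speaks of per-pixel gray-level differences).
    weights = np.array([0.299, 0.587, 0.114])
    gray_x = img_x.astype(np.float64) @ weights
    gray_y = img_y.astype(np.float64) @ weights
    return float(np.abs(gray_x - gray_y).sum())

# First error of a binary image group: compare the second face-changed image D
# with the face-changed person image B, e.g.:
# first_error = accumulated_gray_difference(img_d, img_b)
```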
In step S14, a third face-changed image corresponding to the third person image and the fourth person image in each ternary image group is determined based on the face-changing model; a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group is determined based on the face-changing model; and a fifth face-changed image corresponding to the target person image and the fifth person image is determined based on the face-changing model.
The target person image is the face-changed person image among the third person image and the fourth person image.
In an optional implementation of this embodiment, the third person image and the fourth person image in the ternary image group may be input to the face-changing model to obtain the third face-changed image. Illustratively, the facial feature data of the third person image may be replaced with the facial feature data of the fourth person image through the face-changing model to obtain the third face-changed image, or the facial feature data of the fourth person image may be replaced with the facial feature data of the third person image to obtain the third face-changed image, which is not limited in this embodiment.
Further, the third face-changed image and the fifth person image in the ternary image group may be input to the face-changing model to obtain the fourth face-changed image; the target person image and the fifth person image may then be input to the face-changing model to obtain the fifth face-changed image.
For example, if the third person image is image A, the fourth person image is image B, and the fifth person image is image C, the facial feature data of image B may be replaced with the facial feature data of image A through the face-changing model to obtain image E (the third face-changed image), and the facial feature data of image E may then be replaced with the facial feature data of image C through the face-changing model to obtain image F (the fourth face-changed image); further, image G (the fifth face-changed image) may be obtained by replacing the facial feature data of image B (the target person image) with the facial feature data of image C through the face-changing model.
In step S15, a second error of the ternary image group is calculated from the fourth and fifth face-changed images.
Optionally, after the fourth face-changed image and the fifth face-changed image are obtained, a second error of the corresponding ternary image group may be further calculated according to the fourth face-changed image and the fifth face-changed image.
In an alternative implementation manner of the embodiment, the gray level difference of each corresponding pixel of the fourth face-changed image F and the fifth face-changed image G may be calculated, and the accumulated value of the gray level differences of the pixels may be used as the second error of the corresponding ternary image group.
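Continuing the sketch (reusing the hypothetical swap_face and accumulated_gray_difference helpers from above), the ternary chain and its second error could look like this, with image B as the target person image as in the A/B/C example:

```python
import numpy as np

def ternary_second_error(model, img_a: np.ndarray, img_b: np.ndarray,
                         img_c: np.ndarray) -> float:
    """Interaction consistency error of a ternary image group (A, B, C)."""
    img_e = swap_face(model, target=img_b, source=img_a)  # third face-changed image E
    img_f = swap_face(model, target=img_e, source=img_c)  # fourth face-changed image F
    img_g = swap_face(model, target=img_b, source=img_c)  # fifth face-changed image G
    # F and G should theoretically coincide; their accumulated grayscale
    # difference is taken as the second error of this ternary image group.
    return accumulated_gray_difference(img_f, img_g)
```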
In step S16, an evaluation result of the face change model is determined based on the first error and the second error.
In an optional implementation manner of the embodiment, after the first error of each binary image group and the second error of each ternary image group are obtained, the evaluation effect of the face changing model can be determined according to each first error and each second error.
In a specific example of this embodiment, an accumulated value of each first error and each second error may be calculated, and the effect of the face changing model is evaluated according to the accumulated value; for example, when the accumulated value is smaller than a set threshold (for example, a numerical value such as 0.1 or 0.2, which is not limited in the present embodiment), it may be determined that the face change effect of the face change model is good.
In another specific example of this embodiment, a weighted average of the first errors and the second errors may be calculated, and the effect of the face-changing model is evaluated according to the weighted average; for example, when the weighted average is smaller than a set threshold (for example, a value such as 0.1 or 0.05, which is not limited in this embodiment), it may be determined that the face-changing effect of the face-changing model is good.
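Putting the pieces together (again reusing the helpers sketched above), the overall verdict might be computed as follows. The per-pixel normalization, the plain averaging, and the threshold value are illustrative assumptions; the description allows either an accumulated value or a weighted average compared against a set threshold:

```python
def evaluate_face_changing_model(model, binary_groups, ternary_groups,
                                 threshold: float = 0.1) -> bool:
    """Return True if the face-changing effect is judged good."""
    first_errors = []
    for img_a, img_b in binary_groups:
        _, img_d = binary_round_trip(model, img_a, img_b)
        # Normalize per pixel and by the 255 gray-level range so the error is
        # comparable to a small threshold (a practical assumption; the patent
        # leaves the scale unspecified).
        n_pixels = img_b.size / 3
        first_errors.append(
            accumulated_gray_difference(img_d, img_b) / (n_pixels * 255.0)
        )
    second_errors = [
        ternary_second_error(model, a, b, c) / (b.size / 3 * 255.0)
        for a, b, c in ternary_groups
    ]
    all_errors = first_errors + second_errors
    mean_error = sum(all_errors) / len(all_errors)  # or a weighted average
    return mean_error < threshold
```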
According to the solution of this embodiment, by acquiring at least one binary image group and at least one ternary image group, large-scale original person images can be acquired, providing a basis for subsequently evaluating the face-changing model at scale. A first face-changed image corresponding to the first person image and the second person image in each binary image group is determined according to the face-changing model, and a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group is determined according to the face-changing model; a first error of the binary image group is calculated according to the second face-changed image and the face-changed person image, that is, an autoregressive consistency error corresponding to each binary image group is determined, providing a basis for improving the evaluation accuracy of the face-changing model. A third face-changed image corresponding to the third person image and the fourth person image in each ternary image group is determined according to the face-changing model; a fourth face-changed image corresponding to the third face-changed image and the fifth person image in the ternary image group is determined according to the face-changing model; and a fifth face-changed image corresponding to the target person image and the fifth person image is determined according to the face-changing model, the target person image being the face-changed person image among the third person image and the fourth person image. A second error of the ternary image group is calculated according to the fourth face-changed image and the fifth face-changed image, that is, an interaction consistency error corresponding to each ternary image group is determined, further providing a basis for improving the evaluation accuracy of the face-changing model. Finally, the face-changing effect of the face-changing model is evaluated according to the first errors and the second errors, which solves the problems in the related art that the evaluation accuracy of a face-changing model is low, the influence of human factors is large, and large-scale evaluation is difficult to achieve; the face-changing model is evaluated through large-scale person images without the influence of human factors, and the accuracy of the evaluation is improved.
Fig. 2A is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment, where this embodiment is a further refinement of the above technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more embodiments described above. As shown in fig. 2A, the evaluation method of the face-changing model includes the following steps.
In step S21, at least one binary image group and at least one ternary image group are acquired.
In step S22, the face feature data of the first person image in each of the binary image groups is replaced with the face feature data of the second person image by the face change model, resulting in a first face change image.
Wherein the first person image is a face-changed person image.
In an optional implementation manner of this embodiment, the face feature data of the first person image is replaced with the face feature data of the second person image by the face changing model, so as to obtain a first face changing image; it can be understood that the facial feature data of the first face-changed image is the same as the facial feature data of the second person image, and the non-facial feature data of the first face-changed image is the same as the non-facial feature data of the first person image.
In step S23, the facial feature data of the first face-changed image is replaced with the facial feature data of the first person image by the face-changing model, resulting in the second face-changed image.
In an optional implementation of this embodiment, after the first face-changed image is obtained, the first face-changed image and the first person image may be input into the face-changing model to obtain the second face-changed image.
For example, after the first face-changed image M is obtained, the first face-changed image M and the first person image A may be input into the face-changing model to obtain a second face-changed image D whose facial feature data is the same as that of the first person image A and whose non-facial feature data is the same as that of the first face-changed image M.
In step S24, the gray-level differences of the corresponding pixels of the second face-changed image and the first person image in each binary image group are calculated, and the accumulated value of the gray-level differences is used as the first error.
It should be noted that, if the binary image group includes a first person image A and a second person image B, and the second face-changed image is person image D, then the facial feature data of the second face-changed image D is the same as that of the first person image A, and its non-facial feature data is the same as that of the first face-changed image M. Because the non-facial feature data of the first face-changed image M is the same as that of the first person image A, the non-facial feature data of the second face-changed image D should theoretically also be the same as that of the first person image A; that is, both the facial feature data and the non-facial feature data of the second face-changed image D are theoretically the same as those of the first person image A. In practice, however, the facial feature data and non-facial feature data of the second face-changed image D are not exactly the same as those of the first person image A; there is a certain error, which is referred to as the first error corresponding to the binary image group in this embodiment. The first error involved in this embodiment may also be referred to as an autoregressive consistency error.
In an optional implementation manner of this embodiment, after the second face change images of the binary image groups are obtained, gray differences of corresponding pixels of the second face change images and the first person image may be respectively calculated, and an accumulated value of the gray differences may be used as the first error of the corresponding binary image group.
For example, after the second face-changed image D is obtained, the gray-level differences of the corresponding pixels of the second face-changed image D and the first person image A may be calculated. For example, if the second face-changed image D includes 256 pixels and the first person image A also includes 256 pixels, the gray-level difference of each pair of corresponding pixels of D and A may be calculated, and the accumulated value of the gray-level differences may be used as the first error of the corresponding binary image group. It should be noted that, in this embodiment, pixels at the same position in two images are referred to as corresponding pixels; for example, the pixel (0, 0) in the second face-changed image D and the pixel (0, 0) in the first person image A are corresponding pixels, and the pixel (2, 2) in the second face-changed image D and the pixel (2, 2) in the first person image A are corresponding pixels.
In step S25, a third face-changed image corresponding to the third person image and the fourth person image in each ternary image group is determined based on the face-changing model; a fourth face-changed image corresponding to the third face-changed image and the fifth person image in the ternary image group is determined based on the face-changing model; and a fifth face-changed image corresponding to the target person image and the fifth person image is determined based on the face-changing model.
In step S26, a second error of the ternary image group is calculated from the fourth and fifth face-changed images.
In step S27, an evaluation result of the face change model is determined based on the first error and the second error.
According to the solution of this embodiment, the facial feature data of the first person image in each binary image group is replaced with the facial feature data of the second person image through the face-changing model to obtain the first face-changed image; the facial feature data of the first face-changed image is replaced with the facial feature data of the first person image (the face-changed person image) through the face-changing model to obtain the second face-changed image; the gray-level differences of the corresponding pixels of the second face-changed image and the first person image in each binary image group are calculated, and the accumulated value of the gray-level differences is taken as the first error. This provides a basis for subsequently improving the evaluation accuracy of the face-changing model.
Fig. 2B is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment, where this embodiment is a further refinement of the above technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more embodiments described above. As shown in fig. 2B, the evaluation method of the face-changing model includes the following steps.
In step S210, at least one binary image group and at least one ternary image group are acquired.
In step S220, the face feature data of the second person image in each binary image group is replaced by the face feature data of the first person image through the face changing model, so as to obtain a first face changing image.
Wherein the second person image is a face-changed person image.
In step S230, the facial feature data of the first face-changed image is replaced with the facial feature data of the second person image through the face-changing model to obtain the second face-changed image.
In step S240, the gray-level differences of the corresponding pixels of the second face-changed image and the second person image in each binary image group are calculated, and the accumulated value of the gray-level differences is taken as the first error.
In step S250, a third face-changed image corresponding to the third person image and the fourth person image in each ternary image group is determined according to the face-changing model; a fourth face-changed image corresponding to the third face-changed image and the fifth person image in the ternary image group is determined according to the face-changing model; and a fifth face-changed image corresponding to the target person image and the fifth person image is determined according to the face-changing model.
In step S260, a second error of the ternary image group is calculated from the fourth and fifth face-changed images.
In step S270, an evaluation result of the face-changing model is determined based on the first error and the second error.
It should be noted that the steps of this embodiment are the same as those of the previous embodiment; the only difference is that the facial feature data of the second person image in each binary image group is replaced with the facial feature data of the first person image through the face-changing model to obtain the first face-changed image, together with the corresponding subsequent steps. Details of each step are therefore not repeated in this embodiment.
According to the solution of this embodiment, the facial feature data of the second person image in each binary image group is replaced with the facial feature data of the first person image through the face-changing model to obtain the first face-changed image; the facial feature data of the first face-changed image is replaced with the facial feature data of the second person image (the face-changed person image) through the face-changing model to obtain the second face-changed image; the gray-level differences of the corresponding pixels of the second face-changed image and the second person image in each binary image group are calculated, and the accumulated value of the gray-level differences is taken as the first error. This provides a basis for subsequently improving the evaluation accuracy of the face-changing model.
Fig. 3A is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment, where this embodiment is a further refinement of the above technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more embodiments described above. As shown in fig. 3A, the evaluation method of the face-changing model includes the following steps.
In step S31, at least one binary image group and at least one ternary image group are acquired.
In step S32, a first face-changed image corresponding to the first and second person images in each binary image group is determined based on the face-changing model, and a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group is determined based on the face-changing model.
In step S33, a first error of the binary image group is calculated from the second face-changed image and the face-changed person image.
In step S34, the facial feature data of the third person image is replaced with the facial feature data of the fourth person image by the face-changing model, resulting in the third face-changed image.
In an optional implementation manner of this embodiment, the face feature data of the third person image is replaced with the face feature data of the fourth person image by the face change model, so as to obtain a third face change image; it is understood that the facial feature data of the third face-changed image is the same as the facial feature data of the fourth person image, and the non-facial feature data of the third face-changed image is the same as the non-facial feature data of the third person image.
In step S35, the facial feature data of the third face-changed image is replaced with the facial feature data of the fifth person image by the face-changing model, resulting in the fourth face-changed image.
In an optional implementation of this embodiment, the facial feature data of the third face-changed image is replaced with the facial feature data of the fifth person image by the face-changing model to obtain the fourth face-changed image; it is understood that the facial feature data of the fourth face-changed image is the same as the facial feature data of the fifth person image, and the non-facial feature data of the fourth face-changed image is the same as the non-facial feature data of the third face-changed image.
In step S36, the fifth face-changed image is obtained by replacing the facial feature data of the third person image with the facial feature data of the fifth person image through the face-changing model.
The third person image here is the target person image.
In an optional implementation of this embodiment, after the facial feature data of the third person image has been replaced with the facial feature data of the fourth person image to obtain the third face-changed image, the facial feature data of the third person image may be replaced with the facial feature data of the fifth person image by the face-changing model to obtain the fifth face-changed image; it is understood that the facial feature data of the fifth face-changed image is the same as the facial feature data of the fifth person image, and the non-facial feature data of the fifth face-changed image is the same as the non-facial feature data of the third person image.
In step S37, the gray-level differences of the corresponding pixels of the fourth and fifth face-changed images in each ternary image group are calculated, and the accumulated value of the gray-level differences is used as the second error.
It should be noted that the facial feature data of the third face-changed image E is the same as that of the fourth person image B, and its non-facial feature data is the same as that of the third person image A; the facial feature data of the fourth face-changed image F is the same as that of the fifth person image C, and its non-facial feature data is the same as that of the third face-changed image E (and hence of the third person image A). Since the non-facial feature data of the third face-changed image E is the same as that of the third person image A, the non-facial feature data of the fourth face-changed image F is theoretically the same as that of the third person image A. Similarly, the facial feature data of the fifth face-changed image G is theoretically the same as that of the fifth person image C, and its non-facial feature data is the same as that of the third person image A. That is, theoretically, the fourth face-changed image F and the fifth face-changed image G should be identical; in practice, however, a certain error exists between them. This error is referred to as the second error corresponding to the ternary image group in this embodiment, and may also be referred to as an interaction consistency error.
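For exposition (the notation below is introduced here, not taken from the patent), one can write S(x, y) for the model's output when the facial feature data of image x is replaced by that of image y; the consistency being tested is then:

```latex
% S(x, y): output of the face-changing model, i.e. image x with its facial
% feature data replaced by that of image y (notation for exposition only).
E = \mathcal{S}(A, B), \qquad
F = \mathcal{S}(E, C) = \mathcal{S}(\mathcal{S}(A, B), C), \qquad
G = \mathcal{S}(A, C).
% Interaction consistency requires F \approx G; the second error measures the
% deviation as the accumulated gray-level difference over all pixels p:
\mathrm{err}_2 = \sum_{p} \bigl| \operatorname{gray}(F)_p - \operatorname{gray}(G)_p \bigr|.
```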
In an optional implementation manner of this embodiment, after the fourth face-changed image and the fifth face-changed image of each ternary image group are obtained, gray differences of corresponding pixels of each fourth face-changed image and each fifth face-changed image may be respectively calculated, and an accumulated value of each gray difference may be used as the second error of the corresponding ternary image group.
For example, after the fourth face-changed image F and the fifth face-changed image G are obtained, the gray-level differences of their corresponding pixels may be calculated. For example, if the fourth face-changed image F includes 256 pixels and the fifth face-changed image G also includes 256 pixels, the gray-level difference of each pair of corresponding pixels of F and G may be calculated, and the accumulated value of the gray-level differences may be used as the second error of the corresponding ternary image group.
In step S38, an evaluation result of the face change model is determined based on the first error and the second error.
According to the solution of this embodiment, the facial feature data of the third person image is replaced with the facial feature data of the fourth person image through the face-changing model to obtain the third face-changed image; the facial feature data of the third face-changed image is replaced with the facial feature data of the fifth person image through the face-changing model to obtain the fourth face-changed image; the facial feature data of the third person image (the target person image) is replaced with the facial feature data of the fifth person image through the face-changing model to obtain the fifth face-changed image; the gray-level differences of the corresponding pixels of the fourth face-changed image and the fifth face-changed image of each ternary image group are calculated, and the accumulated value of the gray-level differences is taken as the second error. This further provides a basis for subsequently improving the evaluation accuracy of the face-changing model.
Fig. 3B is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment, where this embodiment is a further refinement of the above technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more embodiments described above. As shown in fig. 3B, the evaluation method of the face-changing model includes the following steps.
In step S310, at least one binary image group and at least one ternary image group are acquired.
In step S320, a first face-changed image corresponding to the first person image and the second person image in each binary image group is determined according to the face-changing model, and a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group is determined according to the face-changing model.
In step S330, a first error of the binary image group is calculated from the second face-changed image and the face-changed person image.
In step S340, the face feature data of the fourth person image is replaced with the face feature data of the third person image through the face changing model, so as to obtain a third face-changed image.
In step S350, the face feature data of the third face-changed image is replaced with the face feature data of the fifth person image through the face changing model, so as to obtain a fourth face-changed image.
In step S360, the face feature data of the fourth person image is replaced with the face feature data of the fifth person image through the face changing model, so as to obtain a fifth face-changed image.
Here, the fourth person image is the target person image.
In step S370, the gray-level differences of the corresponding pixels of the fourth face-changed image and the fifth face-changed image in each ternary image group are calculated, and the accumulated value of the gray-level differences is used as the second error.
In step S380, an evaluation result of the face changing model is determined based on the first error and the second error.
It should be noted that the steps involved in this embodiment are the same as those in the above embodiment; the only difference is that here the third face-changed image is obtained by replacing the face feature data of the fourth person image with the face feature data of the third person image through the face changing model (that is, the fourth person image serves as the target person image), with the subsequent steps adjusted accordingly, so the details of each step are not repeated in this embodiment.
In the scheme of this embodiment, the face feature data of the fourth person image is replaced with the face feature data of the third person image through the face changing model to obtain a third face-changed image; the face feature data of the third face-changed image is replaced with the face feature data of the fifth person image through the face changing model to obtain a fourth face-changed image; the face feature data of the fourth person image (the target person image) is replaced with the face feature data of the fifth person image through the face changing model to obtain a fifth face-changed image; the gray-level differences of the corresponding pixels of the fourth face-changed image and the fifth face-changed image of each ternary image group are calculated, and the accumulated value of the gray-level differences is taken as the second error, which provides a basis for subsequently improving the evaluation accuracy of the face changing model.
Fig. 4 is a flowchart illustrating an evaluation method of a face-changing model according to an exemplary embodiment, where this embodiment is a further refinement of the above technical solution, and the technical solution in this embodiment may be combined with various alternatives in one or more embodiments described above. As shown in fig. 4, the evaluation method of the face-changing model includes the following steps.
In step S41, at least one binary image group and at least one ternary image group are acquired.
In step S42, a first face-changed image corresponding to the first person image and the second person image in each of the binary image groups is determined according to the face changing model, and a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group is determined according to the face changing model.
In step S43, a first error of the binary image group is calculated from the second face-changed image and the face-changed person image.
In step S44, a third face-changed image corresponding to the third person image and the fourth person image in each ternary image group is determined according to the face changing model; a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group is determined according to the face changing model; and a fifth face-changed image corresponding to the target person image and the fifth person image is determined according to the face changing model.
In step S45, a second error of the ternary image group is calculated from the fourth face-changed image and the fifth face-changed image.
In step S46, a total error is calculated from the sum of all the first errors and the sum of all the second errors, and an evaluation result of the face changing model is determined from the total error.
In an optional implementation manner of this embodiment, after the first errors and the second errors are obtained, a total error may be further calculated; that is, all the first errors and all the second errors are accumulated to obtain the total error.
Further, the evaluation result of the face changing model can be determined according to the calculated total error. In an optional implementation manner of this embodiment, when the total error is less than or equal to a set threshold, it is determined that the face changing model satisfies the online condition; when the total error is greater than the set threshold, it is determined that the face changing model does not satisfy the online condition.
The set threshold value in this embodiment may be a numerical value such as 0.2, 0.15, or 0.1, which is not limited in this embodiment.
For example, if the first errors and the second errors are accumulated to obtain a total error of 0.02, which is smaller than the set threshold (0.2), it may be determined that the face changing model meets the online condition; that is, the face changing model may be brought online for normal use. After going online, the face changing model may be used, for example, to implement animation special effects or face security monitoring, which is not limited in this embodiment.
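The threshold comparison itself is straightforward; the following sketch assumes the accumulated per-group errors have already been normalized to the same scale as the set threshold (the patent does not specify a normalization), and the function name is illustrative:

def evaluate_face_changing_model(first_errors, second_errors, threshold=0.2):
    # Accumulate all first and second errors and compare against the set
    # threshold; returns the total error and whether the model satisfies
    # the online condition.
    total_error = sum(first_errors) + sum(second_errors)
    return total_error, total_error <= threshold

# Example: a total error of 0.02 is below the threshold of 0.2, so the
# model would be judged to meet the online condition.
total, meets_online_condition = evaluate_face_changing_model([0.01], [0.01])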
According to the scheme of this embodiment, after all the first errors and all the second errors are obtained, the total error can be further calculated according to the sum of all the first errors and the sum of all the second errors, and the evaluation result of the face changing model can be determined according to the total error, so that an objective evaluation of the face changing model is realized and the influence of human factors is avoided.
In order to help those skilled in the art better understand the evaluation method of the face-changing model in this embodiment, a specific example is described below; the process includes:
1. Obtain a person image data set, where the person image data set includes at least one binary image group and at least one ternary image group.
2. Calculate the autoregressive consistency error of each binary image group.
FIG. 5 is a flowchart illustrating a method for calculating the autoregressive consistency error of a binary image group, according to an exemplary embodiment. As shown in FIG. 5, the method generally includes the following steps.
In step S51, the original person image A and the original person image B are input into the face changing model to obtain the first face-changed image M.
In step S52, the original person image B and the first face-changed image M are input into the face changing model to obtain the second face-changed image D.
In step S53, the grayscale differences of the corresponding pixels of the second face-changed image D and the original person image B are calculated, and the accumulated value of the grayscale differences is used as the autoregressive consistency error of the corresponding binary image group.
In the above example, the face feature data of the original person image A is transferred onto the original person image B to obtain the first face-changed image M; the face feature data of the original person image B is then transferred onto the first face-changed image M to obtain the second face-changed image D, and the pixel difference between the second face-changed image D and the original person image B is calculated. Since the first face-changed image M contains the face feature data of the original person image A and the non-face feature data of the original person image B, and the second face-changed image D is obtained by transferring the face of the original person image B onto the first face-changed image M, the second face-changed image D contains the face feature data of the original person image B and the non-face feature data of the original person image B; the second face-changed image D should therefore equal the original person image B. The autoregressive consistency error can accordingly be calculated from the gray-level differences between corresponding pixels of the second face-changed image D and the original person image B.
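As a minimal sketch of this computation for one binary image group (swap_face is a hypothetical helper standing in for whatever interface the face changing model exposes for transferring the face of a source image onto a target image; the images are assumed to be aligned grayscale NumPy arrays):

import numpy as np

def autoregressive_consistency_error(model, image_a, image_b, swap_face):
    # First error for one binary image group (A, B).
    image_m = swap_face(model, image_a, image_b)  # M: A's face, B's non-face data
    image_d = swap_face(model, image_b, image_m)  # D: B's face back onto M
    # D should ideally reproduce B exactly; the residual is the error.
    diff = np.abs(image_d.astype(np.int32) - image_b.astype(np.int32))
    return float(diff.sum())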
3. Calculate the interactive consistency error of each ternary image group.
FIG. 6 is a flowchart illustrating a method for calculating the interactive consistency error of a ternary image group, according to an exemplary embodiment. As shown in FIG. 6, the method generally includes the following steps.
In step S61, the original person image A and the original person image B are input into the face changing model to obtain the first face-changed image M.
In step S62, the original person image C and the first face-changed image M are input into the face changing model to obtain the third face-changed image E.
In step S63, the original person image B and the original person image C are input into the face changing model to obtain the fourth face-changed image F.
In step S64, the grayscale differences of the corresponding pixels of the third face-changed image E and the fourth face-changed image F are calculated, and the accumulated value of the grayscale differences is used as the interactive consistency error of the corresponding ternary image group.
In the above example, the face feature data of the original person image A is transferred onto the original person image B to obtain the first face-changed image M; the face feature data of the original person image C is then transferred onto the first face-changed image M to obtain the third face-changed image E; the face feature data of the original person image C is also transferred onto the original person image B to obtain the fourth face-changed image F; and the pixel difference between the third face-changed image E and the fourth face-changed image F is calculated. The first face-changed image M contains the face feature data of the original person image A and the non-face feature data of the original person image B, so the third face-changed image E contains the face feature data of the original person image C and the non-face feature data of the original person image B; the fourth face-changed image F likewise contains the face feature data of the original person image C and the non-face feature data of the original person image B. The third face-changed image should therefore ideally equal the fourth face-changed image, and the interactive consistency error of the corresponding ternary image group can be calculated from the pixel difference between the third face-changed image E and the fourth face-changed image F.
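The corresponding sketch for one ternary image group, reusing the hypothetical swap_face helper introduced above:

import numpy as np

def interactive_consistency_error(model, image_a, image_b, image_c, swap_face):
    # Second error for one ternary image group (A, B, C), following FIG. 6.
    image_m = swap_face(model, image_a, image_b)  # M: A's face, B's non-face data
    image_e = swap_face(model, image_c, image_m)  # E: C's face, B's non-face data
    image_f = swap_face(model, image_c, image_b)  # F: C's face, B's non-face data
    # E and F should ideally coincide; their residual is the interactive error.
    diff = np.abs(image_e.astype(np.int32) - image_f.astype(np.int32))
    return float(diff.sum())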
4. Calculate a total error according to the autoregressive consistency errors and the interactive consistency errors, and evaluate the face changing effect of the face changing model numerically according to the total error.
According to the scheme of this embodiment, a person image data set can be obtained, where the person image data set includes at least one binary image group and at least one ternary image group; the autoregressive consistency error of each binary image group is calculated; the interactive consistency error of each ternary image group is calculated; and a total error is calculated from the autoregressive consistency errors and the interactive consistency errors, so that a numerical evaluation of the face changing effect of the face changing model is achieved according to the total error. This can replace manual evaluation, saving a large amount of manpower, and greatly improves the objectivity, scene coverage and accuracy of face changing model evaluation.
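Combining the two sketches above, a dataset-level driver might look as follows; binary_groups, ternary_groups and swap_face are the same hypothetical names introduced earlier, not identifiers taken from the patent:

def total_evaluation_error(model, binary_groups, ternary_groups, swap_face):
    # Total error over a person image data set: binary_groups holds (A, B)
    # pairs and ternary_groups holds (A, B, C) triples of aligned grayscale
    # person images.
    autoregressive = sum(
        autoregressive_consistency_error(model, a, b, swap_face)
        for a, b in binary_groups)
    interactive = sum(
        interactive_consistency_error(model, a, b, c, swap_face)
        for a, b, c in ternary_groups)
    return autoregressive + interactive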
Fig. 7 is a block diagram illustrating an evaluation apparatus of a face-changing model according to an exemplary embodiment. Referring to fig. 7, the apparatus includes an acquisition module 71, a first determination module 72, a first error calculation module 73, a second determination module 74, a second error calculation module 75, and a third determination module 76.
An obtaining module 71, configured to obtain at least one binary image group and at least one ternary image group, where the binary image group includes two original person images with different face feature data, and the ternary image group includes three original person images with different face feature data;
a first determining module 72 configured to determine a first face-changed image corresponding to the first person image and the second person image in each binary image group according to the face changing model, and determine a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group according to the face changing model; the face changing model is used for executing the replacement operation of the face feature data;
a first error calculation module 73 configured to calculate a first error of the binary image group from the second face-changed image and the face-changed person image;
a second determining module 74 configured to determine a third face-changed image corresponding to the third person image and the fourth person image in each ternary image group according to the face changing model; determine a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face changing model; and determine a fifth face-changed image corresponding to the target person image and the fifth person image according to the face changing model;
a second error calculation module 75 configured to calculate a second error of the ternary image group from the fourth face-changed image and the fifth face-changed image;
a third determination module 76 configured to determine an evaluation result of the face-changing model based on the first error and the second error.
Optionally, the first determining module 72 includes a first face-changed image determining sub-module and a second face-changed image determining sub-module.
the first face changing image determining submodule is configured to replace the face feature data of the first person image in each binary image group with the face feature data of the second person image through a face changing model to obtain a first face changing image;
Correspondingly, the second face-changed image determining sub-module is configured to replace the facial feature data of the first face-changed image with the facial feature data of the first person image through the face changing model to obtain a second face-changed image.
Optionally, the first error calculation module 73 is specifically configured to respectively calculate the gray-level differences of the corresponding pixels of the second face-changed image and the face-changed person image in each binary image group, and take the accumulated value of the gray-level differences as the first error.
Optionally, the second determining module 74 comprises a third face-changed image determining sub-module configured to replace the face feature data of the third person image with the face feature data of the fourth person image through the face changing model to obtain a third face-changed image;
the second determination module comprises a fourth face-changed image determination submodule configured to
Replacing the facial feature data of the third face-changed image with the facial feature data of the fifth person image through the face-changed model to obtain a fourth face-changed image;
the second determination module includes a fifth face-changed image determination sub-module configured to
And replacing the facial feature data of the third person image with the facial feature data of the fifth person image through the face changing model to obtain a fifth face changing image.
Optionally, the second error calculation module 75 is specifically configured to respectively calculate the gray-level differences of the corresponding pixels of the fourth face-changed image and the fifth face-changed image of each ternary image group, and take the accumulated value of the gray-level differences as the second error.
Optionally, the third determination module 76 is specifically configured to calculate a total error according to the sum of all the first errors and the sum of all the second errors, and determine the evaluation result of the face changing model according to the total error.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating a structure of an electronic device according to an example embodiment. As shown in fig. 8, the electronic device includes a processor 81 and a memory 82 for storing instructions executable by the processor 81; the memory 82 may include a random access memory (RAM) and a read-only memory (ROM). The processor 81 is configured to execute the instructions to implement the evaluation method of the face-changing model, namely:
acquiring at least one binary image group and at least one ternary image group, wherein the binary image group comprises two original person images with different face feature data, and the ternary image group comprises three original person images with different face feature data;
determining a first face-changing image corresponding to the first person image and the second person image in each binary image group according to the face-changing model, and determining a second face-changing image corresponding to the first face-changing image and the face-changed person image in the binary image group according to the face-changing model; the face changing model is used for executing the replacement operation of the face feature data;
calculating a first error of the binary image group according to the second face-changed image and the face-changed person image;
determining a third face-changed image corresponding to a third person image and a fourth person image in each ternary image group according to the face changing model; determining a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face changing model; determining a fifth face-changed image corresponding to the target person image and the fifth person image according to the face changing model; the target person image is the face-changed person image among the third person image and the fourth person image;
calculating a second error of the ternary image group according to the fourth face-changing image and the fifth face-changing image;
and determining an evaluation result of the face changing model according to the first error and the second error.
In an exemplary embodiment, a storage medium including instructions, such as a memory 82 storing executable instructions, which can be executed by a processor 81 of an electronic device (server or smart terminal) to perform the above evaluation method of the face-changing model is also provided. Alternatively, the storage medium may be a non-transitory computer readable storage medium, for example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is further provided, and when instructions in the computer program product are executed by a processor of an electronic device (a server or a smart terminal), the evaluation method of the face changing model is implemented.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. The specification and examples are to be regarded in an illustrative rather than a restrictive sense.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof.

Claims (14)

1. A method for evaluating a face-changing model is characterized by comprising the following steps:
acquiring at least one binary image group and at least one ternary image group, wherein the binary image group comprises two original person images with different face feature data, and the ternary image group comprises three original person images with different face feature data;
determining a first face-changing image corresponding to a first person image and a second person image in each binary image group according to a face-changing model, and determining a second face-changing image corresponding to the first face-changing image and the face-changed person image in the binary image group according to the face-changing model; the face changing model is used for executing replacement operation of the face feature data;
calculating a first error of the binary image group according to the second face-changed image and the face-changed person image;
determining a third face-changed image corresponding to a third person image and a fourth person image in each ternary image group according to the face changing model; determining a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face changing model; determining a fifth face-changed image corresponding to the target person image and the fifth person image according to the face changing model; the target person image is the face-changed person image among the third person image and the fourth person image;
calculating a second error of the ternary image group according to the fourth face-changed image and the fifth face-changed image;
and determining an evaluation result of the face changing model according to the first error and the second error.
2. The method according to claim 1, wherein the step of determining a first face-changed image corresponding to the first person image and the second person image in each of the binary image groups according to the face changing model comprises:
replacing the face feature data of the first person image in each binary image group with the face feature data of the second person image through the face changing model to obtain a first face changing image; wherein the first person image is the face-changed person image;
Correspondingly, the step of determining a second face-changed image corresponding to the first face-changed image and the face-changed person image in the binary image group according to the face changing model includes:
replacing the facial feature data of the first face-changed image with the facial feature data of the first person image through the face changing model to obtain the second face-changed image.
3. The method according to claim 1, wherein the step of calculating a first error of the binary image group from the second face-changed image and the face-changed person image comprises:
respectively calculating the gray-level differences of the corresponding pixels of the second face-changed image and the face-changed person image in each binary image group, and taking the accumulated value of the gray-level differences as the first error.
4. The method according to claim 1, wherein the step of determining a third face-changed image corresponding to a third person image and a fourth person image in each ternary image group according to the face changing model comprises:
replacing the facial feature data of the third person image with the facial feature data of the fourth person image through the face changing model to obtain a third face-changed image; wherein the third person image is the target person image;
the step of determining a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face changing model comprises:
replacing the facial feature data of the third face-changed image with the facial feature data of the fifth person image through the face changing model to obtain a fourth face-changed image;
the step of determining a fifth face-changed image corresponding to the target person image and the fifth person image according to the face changing model includes:
replacing the facial feature data of the third person image with the facial feature data of the fifth person image through the face changing model to obtain the fifth face-changed image.
5. The method according to claim 1, wherein the step of calculating the second error of the ternary image group from the fourth face-changed image and the fifth face-changed image comprises:
respectively calculating the gray-level differences of the corresponding pixels of the fourth face-changed image and the fifth face-changed image in each ternary image group, and taking the accumulated value of the gray-level differences as the second error.
6. The method according to claim 1, wherein the step of determining the evaluation result of the face-changing model according to the first error and the second error comprises:
and calculating a total error according to the sum of all the first errors and the sum of all the second errors, and determining an evaluation result of the face changing model according to the total error.
7. An evaluation apparatus for a face-changing model, comprising:
an acquisition module configured to acquire at least one binary image group and at least one ternary image group, wherein the binary image group comprises two original person images with different face feature data, and the ternary image group comprises three original person images with different face feature data;
a first determining module configured to determine a first face-changed image corresponding to a first person image and a second person image in each of the binary image groups according to a face-changed model, and determine a second face-changed image corresponding to the first face-changed image and a face-changed person image in the binary image group according to the face-changed model; the face changing model is used for executing replacement operation of the face feature data;
a first error calculation module configured to calculate a first error of the binary image group from the second face-changed image and the face-changed person image;
a second determining module configured to determine a third face-changed image corresponding to a third person image and a fourth person image in each ternary image group according to the face changing model; determine a fourth face-changed image corresponding to the third face-changed image and a fifth person image in the ternary image group according to the face changing model; and determine a fifth face-changed image corresponding to the target person image and the fifth person image according to the face changing model; the target person image is the face-changed person image among the third person image and the fourth person image;
a second error calculation module configured to calculate a second error of the ternary image group from the fourth face-changed image and the fifth face-changed image;
a third determination module configured to determine an evaluation result of the face-changing model according to the first error and the second error.
8. The apparatus of claim 7, wherein the first determination module comprises a first face-changed image determination sub-module and a second face-changed image determination sub-module,
the first face changing image determining submodule is configured to replace the facial feature data of the first person image in each binary image group with the facial feature data of the second person image through the face changing model to obtain a first face changing image; wherein the first person image is the face-changed person image;
the second face-changed image determining submodule is configured to replace the facial feature data of the first face-changed image with the facial feature data of the first person image through the face-changed model to obtain a second face-changed image.
9. The apparatus of claim 7, wherein the first error calculation module is specifically configured to respectively calculate the gray-level differences of the corresponding pixels of the second face-changed image and the face-changed person image in each binary image group, and take the accumulated value of the gray-level differences as the first error.
10. The apparatus of claim 7, wherein the second determination module comprises a third face-changed image determination sub-module configured to replace the facial feature data of the third person image with the facial feature data of the fourth person image through the face changing model to obtain a third face-changed image; wherein the third person image is the target person image;
the second determination module comprises a fourth face-changed image determination sub-module configured to replace the facial feature data of the third face-changed image with the facial feature data of the fifth person image through the face changing model to obtain a fourth face-changed image;
the second determination module comprises a fifth face-changed image determination sub-module configured to replace the facial feature data of the third person image with the facial feature data of the fifth person image through the face changing model to obtain the fifth face-changed image.
11. The apparatus of claim 7, wherein the second error calculation module is specifically configured to respectively calculate the gray-level differences of the corresponding pixels of the fourth face-changed image and the fifth face-changed image in each ternary image group, and take the accumulated value of the gray-level differences as the second error.
12. The apparatus according to claim 7, wherein the third determination module is specifically configured to calculate a total error according to the sum of all the first errors and the sum of all the second errors, and determine the evaluation result of the face changing model according to the total error.
13. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the instructions to implement the evaluation method of the face-changing model according to any one of claims 1 to 6.
14. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the evaluation method of the face-changing model according to any one of claims 1 to 6.
CN202011259430.7A 2020-11-12 2020-11-12 Evaluation method and device of face changing model, electronic equipment and storage medium Active CN112070662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011259430.7A CN112070662B (en) 2020-11-12 2020-11-12 Evaluation method and device of face changing model, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011259430.7A CN112070662B (en) 2020-11-12 2020-11-12 Evaluation method and device of face changing model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112070662A true CN112070662A (en) 2020-12-11
CN112070662B CN112070662B (en) 2021-02-26

Family

ID=73655034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011259430.7A Active CN112070662B (en) 2020-11-12 2020-11-12 Evaluation method and device of face changing model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112070662B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171198A (en) * 2022-09-02 2022-10-11 腾讯科技(深圳)有限公司 Model quality evaluation method, device, equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002358335A (en) * 2001-05-31 2002-12-13 Nec Corp Method, program and system for analysis in finite element method
CN109492540A (en) * 2018-10-18 2019-03-19 北京达佳互联信息技术有限公司 Face exchange method, apparatus and electronic equipment in a kind of image
CN111291863A (en) * 2020-01-20 2020-06-16 腾讯科技(深圳)有限公司 Training method of face changing identification model, face changing identification method, device and equipment
CN111523413A (en) * 2020-04-10 2020-08-11 北京百度网讯科技有限公司 Method and device for generating face image

Also Published As

Publication number Publication date
CN112070662B (en) 2021-02-26

Similar Documents

Publication Publication Date Title
Pabba et al. An intelligent system for monitoring students' engagement in large classroom teaching through facial expression recognition
CN111709409B (en) Face living body detection method, device, equipment and medium
Abudarham et al. Reverse engineering the face space: Discovering the critical features for face identification
CN104424634B (en) Object tracking method and device
CN108765423B (en) Convolutional neural network training method and device
TW202004637A (en) Risk prediction method and apparatus, storage medium, and server
Zhou et al. Utilizing dictionary learning and machine learning for blind quality assessment of 3-D images
CN111768336B (en) Face image processing method and device, computer equipment and storage medium
CN110781976B (en) Extension method of training image, training method and related device
CN109145871A (en) Psychology and behavior recognition methods, device and storage medium
CN110298569A (en) Learning evaluation method and device based on eye movement identification
CN115205764B (en) Online learning concentration monitoring method, system and medium based on machine vision
CN112070662B (en) Evaluation method and device of face changing model, electronic equipment and storage medium
CN116862166A (en) Post matching method, device, equipment and computer storage medium
CN109740527B (en) Image processing method in video frame
Cao et al. Intelligent physical education teaching tracking system based on multimedia data analysis and artificial intelligence
Pillai Student Engagement Detection in Classrooms through Computer Vision and Deep Learning: A Novel Approach Using YOLOv4
CN110163049B (en) Face attribute prediction method, device and storage medium
CN110399818A (en) A kind of method and apparatus of risk profile
CN111860357B (en) Attendance rate calculating method and device based on living body identification, terminal and storage medium
CN114049676A (en) Fatigue state detection method, device, equipment and storage medium
JP2015001859A (en) Information processing apparatus, information processing system, and program
CN110148075A (en) A kind of learning evaluation method and device based on artificial intelligence
CN110297539A (en) A kind of eye movement recognition methods and device based on artificial intelligence
Jiang et al. Image saliency detection with sparse representation of learnt texture atoms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant