CN109492540A - Method, apparatus, and electronic device for face swapping in images - Google Patents

Method, apparatus, and electronic device for face swapping in images

Info

Publication number
CN109492540A
CN109492540A (application CN201811214643.0A; granted publication CN109492540B)
Authority
CN
China
Prior art keywords
image
standard
face
correspondence
dimensional
Prior art date
Legal status
Granted
Application number
CN201811214643.0A
Other languages
Chinese (zh)
Other versions
CN109492540B (en)
Inventor
刘锦龙
陶志奇
郑文
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201811214643.0A
Publication of CN109492540A
Application granted
Publication of CN109492540B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
        • G06V40/161: Detection; Localisation; Normalisation
        • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
        • G06V40/168: Feature extraction; Face representation
        • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
        • G06V40/172: Classification, e.g. identification
        • G06V40/174: Facial expression recognition

Abstract

Embodiments of the present invention provide a method, apparatus, and electronic device for face swapping in images. The method comprises: obtaining a first image and a second image, each containing a face; processing the first image and the second image with a pre-trained deep neural network model to obtain a first standard three-dimensional face model corresponding to the first image and a second standard three-dimensional face model corresponding to the second image; and, according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, exchanging the facial features of the first image with those of the second image to obtain a third image and a fourth image. The first and second standard three-dimensional face models obtained are accurate and include expression features, so the faces in the resulting third and fourth images are accurate and more natural.

Description

Method, apparatus, and electronic device for face swapping in images
Technical field
The present invention relates to the technical field of image processing, and in particular to a face-swapping method, apparatus, and electronic device for images.
Background
In recent years, face-swapping technology has been widely adopted and has developed rapidly, bringing much enjoyment to people's lives. Face swapping exchanges the faces of two people in images while leaving the rest of the body and the image background unchanged. The two faces may be in the same image or in different images.
Face swapping mainly extracts the facial features from an image and places them at the corresponding positions on the target face. Current face-swapping techniques are commonly implemented by key-point calibration: 68 key points are first marked in each of the two face images to be swapped, the two face images are then aligned through these key points, and the facial features are exchanged, that is, the pixel values at these key points are swapped and completed, thereby achieving the face swap.
In the above process, only 68 facial key points are marked, whereas a face image actually contains far more key points. As a result, the face swap is inaccurate and the swapped face looks unnatural.
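For illustration, the key-point alignment underlying the prior-art approach described above can be sketched as follows. This is a toy example: three hand-picked landmark coordinates stand in for the 68 detected key points (real systems use a trained landmark detector), and only the least-squares affine alignment step is shown.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping source key points onto target key points."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])       # n x 3 homogeneous coordinates
    # Solve A @ M = dst for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M

# Toy "key points": three non-collinear landmarks fully determine an affine map.
src = np.array([[10.0, 10.0], [30.0, 10.0], [20.0, 25.0]])
dst = src * 2.0 + 5.0                                # known ground-truth transform
M = estimate_affine(src, dst)
mapped = np.hstack([src, np.ones((3, 1))]) @ M
print(np.allclose(mapped, dst))                      # the fit recovers the transform
```

After such an alignment, the prior art copies pixel values between the aligned key-point positions, which is exactly why coverage limited to 68 points leaves the rest of the face poorly matched.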
Summary of the invention
Embodiments of the present invention aim to provide a face-swapping method, apparatus, and electronic device for images that make the face-swapping effect more natural and accurate. The specific technical solution is as follows:
In a first aspect, an embodiment of the present invention provides a face-swapping method for images, the method comprising:
obtaining a first image and a second image, wherein both the first image and the second image contain a face;
processing the first image and the second image with a pre-trained deep neural network model to obtain a first standard three-dimensional face model corresponding to the first image and a second standard three-dimensional face model corresponding to the second image; wherein the deep neural network model is trained on pre-obtained face image samples and encodes a correspondence between image features and standard three-dimensional face models, the image features including facial features and expression features;
exchanging the facial features of the first image with those of the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image.
In one embodiment, the deep neural network model is trained as follows:
obtaining a preset deep neural network model;
obtaining multiple face image samples and their corresponding standard three-dimensional face models, wherein the standard face models include expression features;
annotating the image features in the multiple face image samples, wherein the image features include facial features and expression features;
inputting the annotated face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model, and training the preset deep neural network model;
stopping training when the accuracy of the output of the preset deep neural network model reaches a preset value or the number of training iterations over the face image samples reaches a preset count, thereby obtaining the deep neural network model.
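The stopping rule of this embodiment, train until output accuracy reaches a preset value or the iteration count reaches a preset number, can be sketched with a toy model. The linear model, the accuracy definition, and all thresholds below are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def train(model_w, samples, targets, lr=0.1, acc_target=0.99, max_iters=500):
    """Toy stand-in for S201-S205: stop when accuracy reaches a preset value
    or the iteration count reaches a preset number."""
    for it in range(1, max_iters + 1):
        pred = samples @ model_w
        grad = samples.T @ (pred - targets) / len(samples)
        model_w -= lr * grad                         # gradient-descent update
        # "Accuracy" here: fraction of predictions within a small tolerance.
        acc = np.mean(np.abs(pred - targets) < 0.05)
        if acc >= acc_target:
            return model_w, it, acc                  # preset accuracy reached
    return model_w, max_iters, acc                   # preset iteration count reached

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([[1.0], [-2.0], [0.5]])
y = X @ true_w
w, iters, acc = train(np.zeros((3, 1)), X, y)
print(iters <= 500 and acc >= 0.99)
```

In the patent's setting the model maps annotated face images to standard three-dimensional face models rather than fitting a linear regression; only the two-part stopping criterion is the point of this sketch.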
In one embodiment, the step of exchanging the facial features of the first image with those of the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain the third and fourth images, comprises:
determining a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;
determining, according to a first correspondence, a second correspondence, and the third correspondence, the pixels in the second image that correspond to the coordinate points of the first standard three-dimensional face model and the pixels in the first image that correspond to the coordinate points of the second standard three-dimensional face model, wherein the first correspondence is the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the second correspondence is the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model;
exchanging the facial features of the first image with those of the second image based on the first correspondence, the second correspondence, and the determined pixels, to obtain the third image and the fourth image.
In one embodiment, the step of exchanging the facial features of the first image with those of the second image based on the first correspondence, the second correspondence, and the determined pixels, to obtain the third and fourth images, comprises:
determining, according to the first correspondence, the pixel values of the coordinate points of the first standard three-dimensional face model;
determining, according to the second correspondence, the pixel values of the coordinate points of the second standard three-dimensional face model;
assigning the pixel values of the coordinate points of the second standard three-dimensional face model to the determined pixels in the first image, obtaining the third image;
assigning the pixel values of the coordinate points of the first standard three-dimensional face model to the determined pixels in the second image, obtaining the fourth image.
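The four assignment steps above can be illustrated on toy data. The dictionaries standing in for the first, second, and third correspondences, and the tiny 2x2 grayscale "images", are hypothetical placeholders for the dense correspondences the model would produce.

```python
import numpy as np

# Toy 2x2 grayscale "images"; real images are colour and far larger.
img1 = np.array([[10, 20], [30, 40]], dtype=float)
img2 = np.array([[90, 80], [70, 60]], dtype=float)

# First/second correspondences: model coordinate-point index -> pixel (row, col).
corr1 = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}   # image 1 <-> model 1
corr2 = {0: (1, 1), 1: (1, 0), 2: (0, 1), 3: (0, 0)}   # image 2 <-> model 2

# Third correspondence: model-1 point k matches model-2 point k (identity here).
third = {k: k for k in corr1}

img3, img4 = img1.copy(), img2.copy()
for p1, p2 in third.items():
    v1 = img1[corr1[p1]]          # pixel value at a model-1 coordinate point
    v2 = img2[corr2[p2]]          # pixel value at the matching model-2 point
    img3[corr1[p1]] = v2          # assign model-2 values into image 1 -> third image
    img4[corr2[p2]] = v1          # assign model-1 values into image 2 -> fourth image

print(img3.tolist(), img4.tolist())
```

The exchange is symmetric: each image receives, at every corresponded pixel, the pixel value the other image holds at the matching coordinate point.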
In one embodiment, the method further comprises:
determining a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;
judging whether the first ratio and the second ratio each exceed a preset ratio;
for any target image whose ratio exceeds the preset ratio, processing the target image in a preset manner, wherein the target image is the third image and/or the fourth image.
In one embodiment, the step of processing the target image in the preset manner comprises:
calculating the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;
judging whether the opening amplitude exceeds a preset amplitude;
if so, performing tooth-completion processing on the mouth in the target image.
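As a sketch of the opening-amplitude check (the patent does not fix a formula), one common heuristic is the ratio of the inner-lip gap to the mouth width, compared against a preset amplitude. The point layout and the threshold value below are assumptions for illustration.

```python
import numpy as np

def mouth_opening_amplitude(mouth_pts):
    """Ratio of inner-lip vertical gap to mouth width (a common heuristic;
    the patent does not specify a particular formula)."""
    mouth_pts = np.asarray(mouth_pts, dtype=float)
    top, bottom, left, right = mouth_pts          # assumed 4-point layout
    gap = np.linalg.norm(top - bottom)            # inner-lip opening
    width = np.linalg.norm(left - right)          # mouth corner to corner
    return gap / width

closed = [(0, 10), (0, 11), (-8, 10.5), (8, 10.5)]   # lips nearly touching
open_ = [(0, 10), (0, 18), (-8, 14), (8, 14)]        # wide-open mouth

PRESET_AMPLITUDE = 0.3                                # illustrative threshold
print(mouth_opening_amplitude(closed) > PRESET_AMPLITUDE,   # False: skip completion
      mouth_opening_amplitude(open_) > PRESET_AMPLITUDE)    # True: run tooth completion
```

Only when the amplitude exceeds the preset value would the tooth-completion processing of this embodiment be applied.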
In one embodiment, the method further comprises:
performing image-fusion processing on the face parts of the third image and the fourth image, so that the swapped faces blend smoothly with the surrounding regions.
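The fusion step is not specified in detail in this translation (the machine-translated text reads "graph cut"; Poisson-style seamless blending, such as OpenCV's seamlessClone, is a typical choice for this purpose). As a dependency-free stand-in, a feathered alpha-blend shows the idea of smoothing the face boundary:

```python
import numpy as np

def feathered_blend(base, patch, mask, feather=1):
    """Blend `patch` into `base` with a softened mask edge: a crude stand-in
    for the fusion step; cv2.seamlessClone would be the Poisson-style option."""
    soft = mask.astype(float)
    for _ in range(feather):                     # naive box blur softens the edge
        padded = np.pad(soft, 1, mode="edge")
        soft = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:] + padded[1:-1, 1:-1]) / 5.0
    return soft * patch + (1.0 - soft) * base

base = np.zeros((5, 5))                          # background / original image
patch = np.full((5, 5), 100.0)                   # swapped face content
mask = np.zeros((5, 5)); mask[1:4, 1:4] = 1      # face region
out = feathered_blend(base, patch, mask)
print(out[2, 2] > out[0, 0])                     # centre takes the patch, corner stays base
```

True Poisson fusion blends gradients rather than intensities, which hides colour and lighting mismatches better than this alpha-blend sketch.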
In one embodiment, the image features further include angle features and/or brightness features.
In a second aspect, an embodiment of the present invention provides a face-swapping apparatus for images, the apparatus comprising:
an image acquisition module configured to obtain a first image and a second image, wherein both the first image and the second image contain a face;
a three-dimensional model acquisition module configured to process the first image and the second image with a pre-trained deep neural network model to obtain a first standard three-dimensional face model corresponding to the first image and a second standard three-dimensional face model corresponding to the second image; wherein the deep neural network model is trained by a model training module on pre-obtained face image samples and encodes the correspondence between image features and standard three-dimensional face models, the image features including facial features and expression features;
a face-swapping module configured to exchange the facial features of the first image with those of the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, obtaining a third image and a fourth image.
In one embodiment, the model training module comprises:
a model acquisition submodule configured to obtain a preset deep neural network model;
a sample acquisition submodule configured to obtain multiple face image samples and their corresponding standard three-dimensional face models, wherein the standard face models include expression features;
a sample annotation submodule configured to annotate the image features in the multiple face image samples, wherein the image features include facial features and expression features;
a model training submodule configured to input the annotated face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model and train it;
a model obtaining submodule configured to stop training and obtain the deep neural network model when the accuracy of the output of the preset deep neural network model reaches a preset value or the number of training iterations over the face image samples reaches a preset count.
In one embodiment, the face-swapping module comprises:
a correspondence determination submodule configured to determine a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;
a pixel determination submodule configured to determine, according to a first correspondence, a second correspondence, and the third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model, wherein the first correspondence is the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the second correspondence is the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model;
a pixel assignment submodule configured to exchange the facial features of the first image with those of the second image based on the first correspondence, the second correspondence, and the determined pixels, obtaining the third image and the fourth image.
In one embodiment, the pixel assignment submodule comprises:
a first pixel determination unit configured to determine, according to the first correspondence, the pixel values of the coordinate points of the first standard three-dimensional face model;
a second pixel determination unit configured to determine, according to the second correspondence, the pixel values of the coordinate points of the second standard three-dimensional face model;
a first pixel assignment unit configured to assign the pixel values of the coordinate points of the second standard three-dimensional face model to the determined pixels in the first image, obtaining the third image;
a second pixel assignment unit configured to assign the pixel values of the coordinate points of the first standard three-dimensional face model to the determined pixels in the second image, obtaining the fourth image.
In one embodiment, the apparatus further comprises:
a ratio determination module configured to determine a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;
a ratio judgment module configured to judge whether the first ratio and the second ratio each exceed a preset ratio;
a processing module configured to process, in a preset manner, any target image whose ratio exceeds the preset ratio, wherein the target image is the third image and/or the fourth image.
In one embodiment, the processing module comprises:
an opening-amplitude determination unit configured to calculate the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;
an opening-amplitude judgment unit configured to judge whether the opening amplitude exceeds a preset amplitude;
a completion processing unit configured to perform tooth-completion processing on the mouth in the target image when the opening amplitude exceeds the preset amplitude.
In one embodiment, the apparatus further comprises:
a fusion processing module configured to perform image-fusion processing on the face parts of the third image and the fourth image.
In one embodiment, the image features further include angle features and/or brightness features.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
the memory is configured to store a computer program;
the processor, when executing the program stored in the memory, implements any of the above face-swapping methods for images.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the above face-swapping methods for images.
In a fifth aspect, an embodiment of the present invention provides an application program product which, when run, executes any of the above face-swapping methods for images.
In the solutions provided by embodiments of the present invention, an electronic device first obtains a first image and a second image, both containing a face; it then processes them with a pre-trained deep neural network model to obtain a first standard three-dimensional face model corresponding to the first image and a second standard three-dimensional face model corresponding to the second image; and, according to the correspondence between the first image and the first standard three-dimensional face model and the correspondence between the second image and the second standard three-dimensional face model, it exchanges the facial features of the first image with those of the second image to obtain a third image and a fourth image. Because the first and second standard three-dimensional face models are produced by a deep neural network, and the deep neural network model encodes the correspondence between image features (including facial features and expression features) and standard three-dimensional face models, the obtained models are accurate and include expression features, so the faces in the resulting third and fourth images are accurate and more natural.
Description of the drawings
In order to explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a face-swapping method for images provided by an embodiment of the present invention;
Fig. 2 is a detailed flowchart of the deep-neural-network-model training method in the embodiment shown in Fig. 1;
Fig. 3 is a detailed flowchart of step S103 in the embodiment shown in Fig. 1;
Fig. 4 is a detailed flowchart of step S303 in the embodiment shown in Fig. 3;
Fig. 5 is a flowchart of a preset processing manner based on the embodiment shown in Fig. 1;
Fig. 6 is a detailed flowchart of step S503 in the embodiment shown in Fig. 5;
Fig. 7 is a schematic structural diagram of a face-swapping apparatus for images provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To make the face-swapping effect more natural and accurate, embodiments of the present invention provide a face-swapping method, apparatus, electronic device, and computer-readable storage medium for images.
A face-swapping method for images provided by an embodiment of the present invention is introduced first.
The method can be applied to any electronic device that needs to perform face swapping, hereinafter referred to as the electronic device, for example a mobile phone, a computer, a tablet computer, or a processor; this is not specifically limited here.
As shown in Fig. 1, a face-swapping method for images comprises:
S101: obtaining a first image and a second image;
wherein both the first image and the second image contain a face.
S102: processing the first image and the second image with a pre-trained deep neural network model to obtain a first standard three-dimensional face model corresponding to the first image and a second standard three-dimensional face model corresponding to the second image;
wherein the deep neural network model is trained on pre-obtained face image samples and encodes the correspondence between image features and standard three-dimensional face models, the image features including facial features and expression features.
S103: exchanging the facial features of the first image with those of the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, obtaining a third image and a fourth image.
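Steps S101 to S103 can be sketched as a small orchestration in which a stub replaces the deep network and returns a dense pixel-to-coordinate-point correspondence; every name below is hypothetical, and the stub trivially maps each pixel to its own coordinates rather than predicting a real three-dimensional model.

```python
import numpy as np

def stub_model(image):
    """Stand-in for the pre-trained deep network (S102): returns a 'standard
    3-D face model' as a flat list of coordinate points, one per face pixel."""
    h, w = image.shape
    return [(r, c) for r in range(h) for c in range(w)]  # point i <-> pixel i

def face_swap(img1, img2):
    m1, m2 = stub_model(img1), stub_model(img2)          # S102: the two models
    img3, img4 = img1.copy(), img2.copy()
    for p1, p2 in zip(m1, m2):                           # S103: swap through the models
        img3[p1], img4[p2] = img2[p2], img1[p1]
    return img3, img4

a = np.arange(4.0).reshape(2, 2)        # S101: obtain the two face images
b = a + 10
c, d = face_swap(a, b)
print(np.array_equal(c, b) and np.array_equal(d, a))     # toy: faces fully exchanged
```

With the trivial stub every pixel is corresponded, so the toy swap exchanges the two images entirely; a real model would only correspond face pixels, leaving body and background untouched.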
As can be seen, in the solution provided by the embodiment of the present invention, the electronic device first obtains a first image and a second image, both containing a face; it processes them with a pre-trained deep neural network model to obtain the corresponding first and second standard three-dimensional face models; and then, according to the correspondence between each image and its standard three-dimensional face model, it exchanges the facial features of the two images to obtain a third image and a fourth image. Because the standard three-dimensional face models are produced by a deep neural network that encodes the correspondence between image features (including facial features and expression features) and standard three-dimensional face models, the obtained models are accurate and include expression features, so the faces in the resulting third and fourth images are accurate and more natural.
In step S101, the electronic device obtains the first image and the second image. It should be understood that these are the images on which face swapping is to be performed, and that each contains a face. They may be images stored locally on the electronic device, or images received from other electronic devices; both are reasonable.
In one embodiment, the electronic device may obtain the first and second images according to a face-swap instruction issued by a user. When face swapping is needed, the user may select images stored on the electronic device; the device then receives the face-swap instruction and takes the selected images as the first and second images. Alternatively, the device may display an image-name input box; when the user enters image names there, the device receives the face-swap instruction and takes the images with those names as the first and second images.
After obtaining the first and second images, the electronic device performs step S102, processing them with the pre-trained deep neural network model to obtain the first and second standard three-dimensional face models.
In one embodiment, the electronic device may first run face detection on the first and second images to obtain the face regions of each, and then feed those face images into the pre-trained deep neural network model to obtain the first and second standard three-dimensional face models.
The deep neural network model is trained on pre-obtained face image samples and encodes the correspondence between image features and standard three-dimensional face models; the image features may include facial features and expression features. Facial features are the texture and shape features of the face; expression features characterize the face's expression, for example laughing, crying, or anger.
The structure of the deep neural network model, such as the number of neurons or of convolutional layers, is not specifically limited by the embodiments of the present invention, as long as training can produce a model that encodes the correspondence between image features and standard three-dimensional face models.
Since the deep neural network model encodes the correspondence between expression features and standard three-dimensional face models, the first and second standard three-dimensional face models it outputs include expression features. For clarity of the solution and of the layout, the training of the model is illustrated later.
Next, in step S103, the electronic device exchanges the facial features of the first image with those of the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, obtaining the third image and the fourth image.
Because the coordinate points of a standard three-dimensional face model are fixed, the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and between pixels in the second image and the coordinate points of the second standard three-dimensional face model, can be determined. Using the first and second standard three-dimensional face models as the link, the facial features of the two images are exchanged, yielding the third and fourth images, i.e., the images after the face swap.
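Because the coordinate points of the standard models are fixed, the link between the two models (the third correspondence of the earlier embodiment) can be made explicit. A nearest-neighbour match over the fixed point sets is one illustrative way to construct it; the function and point data below are assumptions.

```python
import numpy as np

def third_correspondence(model1_pts, model2_pts):
    """Match each coordinate point of model 1 to its nearest point in model 2.
    Both are 'standard' models with fixed points, so in practice the match
    reduces to the identity; nearest-neighbour just makes the idea explicit."""
    m1 = np.asarray(model1_pts, dtype=float)
    m2 = np.asarray(model2_pts, dtype=float)
    dists = np.linalg.norm(m1[:, None, :] - m2[None, :, :], axis=2)
    return {i: int(j) for i, j in enumerate(dists.argmin(axis=1))}

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)]
match = third_correspondence(pts, pts)
print(match)   # identity match between the two standard models
```

Composing this match with the two pixel-to-coordinate-point correspondences gives, for each pixel of one image, the matching pixel of the other image.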
As an embodiment of the present invention, as shown in Fig. 2, the training of the above deep neural network model may include:
S201: obtaining a preset deep neural network model;
First, the electronic device obtains a preset deep neural network model, whose initial parameters may be set randomly.
S202: obtaining multiple face image samples and their corresponding standard three-dimensional face models;
To obtain training samples for the preset deep neural network model, the electronic device obtains multiple face image samples and their corresponding standard three-dimensional face models. So that the trained deep neural network model can output standard three-dimensional face models that include expression features, each standard three-dimensional face model obtained includes the expression features corresponding to its face image sample.
The electronic device may determine the standard three-dimensional face model corresponding to a face image sample in any manner known in the related art; this is not specifically limited here.
S203: labelling the image features in the multiple face image samples;

So that, during training, the preset deep neural network model can learn the correspondence between face features plus expression features and the standard three-dimensional face model, the electronic device may label the image features in each of the above face image samples, where the image features include face features and expression features.

The electronic device may determine the image features in a face image sample by techniques such as face recognition and face detection, and label them accordingly.
S204: inputting the labelled face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model, and training the preset deep neural network model;

After labelling the image features in the multiple face image samples, the electronic device may input the labelled face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model to train it.
In this way, during training, the preset deep neural network model continually learns the correspondence between image features and standard three-dimensional face models, and its parameters are continually adjusted; the model thus gradually builds up an accurate correspondence between image features and standard three-dimensional face models.

The specific training method is not limited by this embodiment of the present invention; any model training method in the related art may be used, for example gradient descent.
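The parameter adjustment mentioned above can be written out as the single gradient-descent update it repeats. This is a generic textbook step, not the patent's specific optimizer; the learning rate `lr` is an assumed hyperparameter:

```python
def gradient_descent_step(params, grads, lr=0.01):
    """One gradient-descent parameter update: move each parameter a small
    step against its gradient. Repeating this over labelled samples is
    what gradually shapes the model's learned correspondence."""
    return [p - lr * g for p, g in zip(params, grads)]
```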
S205: stopping training to obtain the deep neural network model when the accuracy of the output of the preset deep neural network model reaches a preset value, or when the number of training iterations over the face image samples reaches a preset number.

As the parameters of the preset deep neural network model are continually adjusted during training, its output becomes increasingly accurate. In one embodiment, when the accuracy of the output reaches a preset value, the model is able to output an accurate standard three-dimensional face model for an arbitrary face image, and training may stop.

The preset accuracy may be set according to the accuracy required of the standard three-dimensional face model in the actual scenario, for example 90%, 95%, or 98%, and is not specifically limited here.

In another embodiment, the face image samples are fed into the preset deep neural network model continually during training, and each input of a face image sample may be called one iteration. When the number of training iterations reaches a preset number, a large number of face image samples have been trained on, and the model is likewise able to output an accurate standard three-dimensional face model for an arbitrary face image, so training may also stop.
As can be seen, in this embodiment the electronic device may train the preset deep neural network model with the obtained face image samples and corresponding standard three-dimensional face models, and stop training when the output accuracy reaches a preset value or the number of training iterations reaches a preset number, obtaining the deep neural network model. This training method yields a deep neural network model that can accurately output standard three-dimensional face models including expression features.
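Steps S201-S205 amount to a training loop with two stopping criteria. The sketch below keeps the model, the update step, and the accuracy evaluation as assumed callbacks, since the patent does not fix them:

```python
def train(model, samples, labels, step, evaluate,
          target_acc=0.95, max_iters=10000):
    """Training loop mirroring S201-S205: iterate over labelled face-image
    samples, update the model via `step`, and stop once `evaluate` reports
    an accuracy at or above the preset value, or the iteration count (one
    sample input = one iteration) reaches the preset number."""
    iters = 0
    while iters < max_iters:
        for x, y in zip(samples, labels):
            step(model, x, y)          # one parameter update
            iters += 1
            if iters >= max_iters:
                break
        if evaluate(model) >= target_acc:
            break                       # accuracy criterion met
    return model, iters
```

Evaluating once per pass over the samples is a design choice for the sketch; the patent only requires that training stop when either criterion is met.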
As an embodiment of the present invention, as shown in Fig. 3, the step of swapping the face features of the first image and the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image, includes:

S301: determining a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;

Because the first standard three-dimensional face model and the second standard three-dimensional face model are both standard three-dimensional face models, the positions of their coordinate points are fixed. Therefore, after obtaining the two models, the electronic device can determine the correspondence between their coordinate points, hereafter called the third correspondence for convenience of description.
S302: determining, according to a first correspondence, a second correspondence and the third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model, and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model;

For convenience of description, the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model is hereafter called the first correspondence, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model is called the second correspondence.

After the above third correspondence has been determined, the electronic device can determine, from the first correspondence, the second correspondence and the third correspondence, the pixels in the second image that correspond to the coordinate points of the first standard three-dimensional face model.
Because the correspondence between the coordinate points of the first standard three-dimensional face model and those of the second standard three-dimensional face model, and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, are both determined, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model can be found by coordinate conversion.

It can be understood that these determined pixels in the second image are the pixels corresponding to the face part of the first image, that is, the pixels on which the face-swap operation is carried out.

Similarly, because the correspondence between the coordinate points of the two standard models, and the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, are both determined, the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model can likewise be found by coordinate conversion.

Likewise, these determined pixels in the first image are the pixels corresponding to the face part of the second image, that is, the pixels on which the face-swap operation is carried out.
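The coordinate conversion of S302 is a chained lookup through the third correspondence. The sketch below assumes (hypothetically) that each correspondence is available in model-point-to-pixel dictionary form; finding the second-image pixel for a first-model point is then two dictionary hops:

```python
def pixels_for_model1_points(third_corr, model2_to_pixel2):
    """For each coordinate point of the first standard model, follow the
    third correspondence to its counterpart on the second standard model,
    then the (inverted) second correspondence to a pixel of the second
    image. Keys/values are assumed identifiers, not the patent's format."""
    return {p1: model2_to_pixel2[p2] for p1, p2 in third_corr.items()}
```

The symmetric direction (first-image pixels for second-model points) is the same composition with the correspondences exchanged.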
S303: swapping the face features of the first image and the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image.

The electronic device may next swap the face features of the first image and the second image based on the determined first correspondence, second correspondence and determined pixels, and thus obtain the third image and the fourth image.

Since the pixels in the first image and the second image on which the face-swap operation is carried out have been determined, the determined pixels can be assigned values, yielding the third image and the fourth image after the face swap.

As can be seen, in this embodiment the electronic device can determine, from the first correspondence, the second correspondence and the determined third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model, and then swap the face features of the first image with those of the second image to obtain the third image and the fourth image, producing accurate and natural face-swapped images.
As an embodiment of the present invention, as shown in Fig. 4, the step of swapping the face features of the first image and the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain a third image and a fourth image, may include:

S401: determining, according to the first correspondence, the pixel values of the coordinate points of the first standard three-dimensional face model;

S402: determining, according to the second correspondence, the pixel values of the coordinate points of the second standard three-dimensional face model;

Because the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model has been determined, and the pixel values of the pixels in the first image are known, values can be assigned to the coordinate points of the first standard three-dimensional face model through this mapping.

Likewise, because the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model has been determined, and the pixel values of the pixels in the second image are known, values can be assigned to the coordinate points of the second standard three-dimensional face model through this mapping.

It should be noted that the execution order of steps S401 and S402 is not limited: S401 may be performed first, S402 may be performed first, or the two may be performed simultaneously; all of these are reasonable.
S403: assigning values to the determined pixels in the first image based on the pixel values of the coordinate points of the second standard three-dimensional face model, to obtain the third image;

S404: assigning values to the determined pixels in the second image based on the pixel values of the coordinate points of the first standard three-dimensional face model, to obtain the fourth image.

Having determined the pixel values of the coordinate points of the first standard three-dimensional face model and the second standard three-dimensional face model, the electronic device can assign values to the determined pixels in the first image and in the second image.

When assigning values to the determined pixels in the first image according to the pixel values of the coordinate points of the first standard three-dimensional face model, the conversion from three-dimensional coordinate points to two-dimensional ones may map several coordinate points to the same pixel, that is, some coordinate points are covered. In that case, the depth information carried by the three-dimensional coordinate points can be used to determine which coordinate points are covered, and the pixel value of the pixel in the first image is then set to the value corresponding to the uncovered coordinate point, yielding the third image.

For example, suppose the coordinate points (20, 26, 32) and (20, 26, 45) of the first standard three-dimensional face model correspond to the same pixel (20, 26) in the first image. The covered point is determined from the depth information, here the values 32 and 45: since 45 represents a greater depth than 32, the deeper coordinate point (20, 26, 45) is covered in the two-dimensional face image, and the pixel value of the coordinate point (20, 26, 32) is therefore assigned to the corresponding pixel in the first image.
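The covered-point logic above is a depth buffer: when several model points land on one pixel, the smallest depth wins. A minimal sketch, assuming the model points are already projected so their first two components give the target pixel:

```python
import numpy as np

def assign_with_depth(points, values, shape):
    """Assign model coordinate-point values to image pixels. When several
    3D points project to the same pixel, the point with the smaller depth
    wins and the deeper one is treated as covered, as in the
    (20, 26, 32) vs (20, 26, 45) example."""
    img = np.zeros(shape, dtype=float)
    zbuf = np.full(shape, np.inf)       # depth of the nearest point so far
    for (r, c, z), v in zip(points, values):
        if z < zbuf[r, c]:              # nearer than anything seen yet
            zbuf[r, c] = z
            img[r, c] = v
    return img
```

Note the result does not depend on the order in which points are visited, which is what makes the depth test a correct resolution of covered coordinate points.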
Likewise, having determined the pixel values of the coordinate points of the first and second standard three-dimensional face models, the electronic device can assign values to the determined pixels in the second image to obtain the fourth image. The specific manner is the same as the assignment to the pixels in the first image described above and is not repeated here.

Because the first standard three-dimensional face model and the second standard three-dimensional face model include expression features, the third image and the fourth image obtained after pixel assignment are images that include expression features, and the face-exchange effect is more natural.

It should be noted that the execution order of steps S403 and S404 is not limited: S403 may be performed first, S404 may be performed first, or the two may be performed simultaneously; all of these are reasonable.

As can be seen, in this embodiment the electronic device can determine the pixel values of the coordinate points of the first and second standard three-dimensional face models, assign values to the determined pixels in the first image to obtain the third image, and assign values to the determined pixels in the second image to obtain the fourth image. Because both standard three-dimensional face models include expression features, the resulting third and fourth images are both images that include expression features; the face-exchange effect is more natural and the user experience is good.
To further enhance the face-exchange effect, as an embodiment of the present invention, as shown in Fig. 5, the above method may also include:

S501: determining a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;

Because the face-exchange effect differs depending on the proportion of the image occupied by the mouth of the face, the electronic device may determine the first ratio occupied by the mouth of the face in the third image and the second ratio occupied by the mouth of the face in the fourth image.

The electronic device may determine the above first ratio and second ratio in any manner capable of determining the proportion of an image occupied by the mouth of a face, such as face detection; this is not specifically limited here.
S502: judging whether the first ratio and the second ratio are respectively greater than a preset ratio;

After determining the above first ratio and second ratio, the electronic device may judge whether each of them is greater than a preset ratio. The preset ratio may be determined from empirical values and factors such as the realism required of the face-swap effect; for example, it may be 50%, 40% or 35%, and is not specifically limited here.
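One simple way to realise S501-S502 is to take the mouth's bounding box as a proxy for its proportion and compare against the preset ratio. The bounding-box measure and the 0.35 default are assumptions for illustration; the patent leaves both open:

```python
def mouth_ratio(mouth_w, mouth_h, img_w, img_h):
    """Proportion of the image occupied by the mouth's bounding box -
    a simple stand-in for the ratio of S501."""
    return (mouth_w * mouth_h) / (img_w * img_h)

def needs_mouth_processing(ratio, preset=0.35):
    """S502 gate: only a ratio strictly above the preset value triggers
    the extra processing of S503."""
    return ratio > preset
```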
S503: for a target image whose judgment result is yes, processing the target image according to a preset processing manner.

When the judgment result is yes, the mouth of the face in the target image occupies a relatively large proportion of the image, so the details of the mouth are relatively clear and the requirement on the face-exchange effect is correspondingly higher. It can be understood that the target image is the third image and/or the fourth image.

In the first case, the judgment result is yes for only one of the third image and the fourth image. The mouth of the face in that target image occupies a relatively large proportion, so that target image can be processed according to the preset processing manner to improve the face-exchange effect.

In the second case, the judgment results for both the third image and the fourth image are yes. The mouths of the faces in both images occupy relatively large proportions, so both the third image and the fourth image can be processed according to the preset processing manner to improve the face-exchange effect.

In the third case, the judgment results for both the third image and the fourth image are no. The mouths of the faces in both images occupy relatively small proportions, so no processing is required.

As can be seen, in this embodiment the electronic device can apply the preset processing to target images in which the mouth of the face occupies a relatively large proportion, making the face-exchange effect better and the details more natural and realistic.
As an embodiment of the present invention, as shown in Fig. 6, the step of processing the target image according to the preset processing manner may include:

S601: calculating the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;

Because teeth are generally not exchanged as a face feature during a face swap, a target image in which the mouth of the face occupies a large proportion is likely to show missing teeth after the face exchange. The electronic device can therefore calculate the opening amplitude of the mouth in the target image from the mouth feature points in the target image.

The electronic device can determine the mouth feature points in the target image with a feature-point detection algorithm, and then determine the opening amplitude of the mouth in the target image from the determined mouth feature points.

The opening amplitude of the mouth is the extent to which the upper and lower lips are parted. It can be understood that the larger the opening amplitude, the wider the two lips are parted and the more teeth should be exposed; the smaller the opening amplitude, the narrower the parting and the fewer teeth should be exposed.
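Since the example below expresses the preset amplitude in degrees, one way to compute an opening amplitude from detected mouth feature points is as the angle between the upper- and lower-lip landmarks as seen from a mouth corner. This particular angular measure is an assumption for illustration; the patent does not prescribe the formula:

```python
import math

def opening_amplitude_deg(corner, upper_lip, lower_lip):
    """Opening amplitude in degrees: the angle at a mouth-corner landmark
    between the directions to an upper-lip and a lower-lip landmark.
    Landmarks are (x, y) points from any feature-point detector."""
    a_up = math.atan2(upper_lip[1] - corner[1], upper_lip[0] - corner[0])
    a_lo = math.atan2(lower_lip[1] - corner[1], lower_lip[0] - corner[0])
    return abs(math.degrees(a_up - a_lo))
```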
S602: judging whether the opening amplitude is greater than a preset amplitude; if so, executing step S603; if not, performing no processing.

After determining the opening amplitude of the mouth in the target image, in order to decide whether tooth-completion processing is needed, the electronic device may judge whether the opening amplitude is greater than a preset amplitude, which may be determined from empirical values. For example, if a mouth opened 50 degrees clearly shows teeth, the preset amplitude may be set to 50 degrees.

If the opening amplitude is greater than the preset amplitude, the mouth of the person in the target image is open wide and would in reality expose more teeth, so step S603 can be executed to make the face more realistic.

If the opening amplitude is not greater than the preset amplitude, the mouth of the person in the target image is open only slightly and would in reality expose few or no teeth, so no processing is needed.
S603: performing tooth-completion processing on the mouth in the target image.

If the opening amplitude is greater than the preset amplitude, the mouth of the person in the target image is open wide and would in reality expose more teeth, so the electronic device can perform tooth-completion processing on the mouth in the target image, making the face in the target image more natural and realistic.

The specific manner of performing tooth-completion processing on the mouth in the target image is not specifically limited or illustrated here; for example, a dental image may be added, as long as the teeth of the mouth in the target image are completed.

As can be seen, in this embodiment the electronic device can calculate the opening amplitude of the mouth in the target image from the mouth feature points, judge whether the opening amplitude is greater than the preset amplitude, and if so perform tooth-completion processing on the mouth in the target image. In this way, the face-swapped image is more natural and realistic, and the face-exchange effect is better.
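The "add a dental image" option can be sketched as pasting a teeth template into the inner-mouth rectangle. The rectangle layout and the lack of edge blending are simplifications; a practical implementation would at least feather the patch boundary:

```python
import numpy as np

def paste_teeth(image, box, teeth_patch):
    """Naive tooth-completion sketch: copy a teeth template into the
    inner-mouth rectangle box = (top, left, height, width). A hypothetical
    stand-in for the dental-image approach, not the patent's method."""
    top, left, h, w = box
    out = image.copy()
    out[top:top + h, left:left + w] = teeth_patch[:h, :w]
    return out
```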
As an embodiment of the present invention, the above method may also include:

performing graph-cut fusion processing on the face parts in the third image and the fourth image.

Because the skin tones of the faces in the first image and the second image being exchanged may differ considerably, the third image and the fourth image obtained by the face exchange may suffer from uneven skin tone, which harms the realism of the images.

For example, if the skin of the face in the first image is darker and the skin of the face in the second image is lighter, then after the face exchange the third image may have very light skin at features such as the eyes and nose while areas such as the forehead and chin retain the original darker tone; the fourth image shows a similar problem.

To avoid this, the electronic device can perform graph-cut fusion processing on the face parts in the third image and the fourth image, making the skin tone of the face parts in both images uniform and the images more natural and realistic.

As can be seen, in this embodiment, after obtaining the third image and the fourth image, the electronic device can perform graph-cut fusion processing on the face parts of both images. In this way, the skin tone of the face parts in the third image and the fourth image is uniform, avoiding the uneven skin tone that would harm the face-exchange effect.
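The goal of the fusion step can be illustrated with a much simpler stand-in: shift the swapped-in face region so its per-channel mean and spread match the surrounding skin. This is a crude colour transfer, not the graph-cut fusion the patent names, and serves only to show what "making the skin tone uniform" means:

```python
import numpy as np

def match_skin_tone(face_region, target_region):
    """Normalise the face region's per-channel statistics to those of the
    target skin region, reducing the tone mismatch described above."""
    f = face_region.astype(np.float64)
    t = target_region.astype(np.float64)
    f_std = f.std(axis=(0, 1)) + 1e-8          # avoid division by zero
    out = (f - f.mean(axis=(0, 1))) / f_std    # zero-mean, unit-spread
    out = out * t.std(axis=(0, 1)) + t.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)
```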
To further improve the face-exchange effect and make the exchanged images more natural and realistic, as an embodiment of the present invention, the above image features may also include angle features and/or brightness features, etc.

Because a face may appear in an image at various angles, for example with the head tilted back, tilted sideways, turned left or turned right, and these angle features affect the face-exchange effect, the face image samples used to train the above preset deep neural network model may include face images at various angles. In this way, during training the preset deep neural network model can learn the correspondence between image features including angle features and the standard three-dimensional face model. A deep neural network model capable of outputting standard three-dimensional face models that include angle features is thus obtained.

Likewise, different illumination leads to different image brightness and also affects the state of the face in the image. Therefore, when training the above preset deep neural network model, the face image samples may include face images of various brightness, so that during training the preset deep neural network model can learn the correspondence between image features including brightness features and the standard three-dimensional face model. A deep neural network model capable of outputting standard three-dimensional face models that include brightness features is thus obtained.

Of course, the above image features may also include any feature that affects the state of the face in the image, and the face image samples may include face images with such various features, so that training yields a model capable of outputting standard three-dimensional face models including the various features.

As can be seen, in this embodiment the above image features may also include angle features and/or brightness features, etc., so that training yields a model capable of outputting standard three-dimensional face models including various features, further improving the face-exchange effect and making the exchanged images more natural and realistic.
Corresponding to the face exchange method in an image described above, an embodiment of the present invention also provides a face exchange apparatus in an image.

The face exchange apparatus in an image provided by the embodiment of the present invention is introduced below.

As shown in Fig. 7, a face exchange apparatus in an image includes:

an image acquisition module 710, configured to obtain a first image and a second image;

wherein the first image and the second image both include a face;

a three-dimensional model acquisition module 720, configured to process the first image and the second image through a deep neural network model obtained in advance, to obtain a first standard three-dimensional face model and a second standard three-dimensional face model corresponding respectively to the first image and the second image;

wherein the deep neural network model is trained by a model training module (not shown in Fig. 7) based on face image samples obtained in advance; the deep neural network model includes the correspondence between image features and standard three-dimensional face models, and the image features include face features and expression features;

a face exchange module 730, configured to swap the face features of the first image and the second image according to the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image.

As can be seen, in the solution provided by the embodiment of the present invention, the electronic device may first obtain a first image and a second image, both of which include a face; then process the first image and the second image through a deep neural network model obtained in advance, obtaining the first standard three-dimensional face model and the second standard three-dimensional face model corresponding to the two images; and then, according to the correspondence between the first image and the first standard three-dimensional face model and the correspondence between the second image and the second standard three-dimensional face model, swap the face features of the first image and the second image to obtain a third image and a fourth image. Because the first and second standard three-dimensional face models are obtained by the deep neural network, which includes the correspondence between image features and standard three-dimensional face models, and the image features include face features and expression features, the obtained first and second standard three-dimensional face models are accurate and include expression features, so that the faces in the final third and fourth images are accurate and more natural.
As an embodiment of the present invention, the above model training module may include:

a model acquisition submodule (not shown in Fig. 7), configured to obtain a preset deep neural network model;

a sample acquisition submodule (not shown in Fig. 7), configured to obtain multiple face image samples and corresponding standard three-dimensional face models, wherein the standard face models include expression features;

a sample labelling submodule (not shown in Fig. 7), configured to label the image features in the multiple face image samples, wherein the image features include face features and expression features;

a model training submodule (not shown in Fig. 7), configured to input the labelled face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model, and train the preset deep neural network model;

a model obtaining submodule (not shown in Fig. 7), configured to stop training and obtain the deep neural network model when the accuracy of the output of the preset deep neural network model reaches a preset value or the number of iterations reaches a preset number.
As an embodiment of the present invention, the above face exchange module 730 may include:

a correspondence determination submodule (not shown in Fig. 7), configured to determine a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;

a pixel determination submodule (not shown in Fig. 7), configured to determine, according to a first correspondence, a second correspondence and the third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model, wherein the first correspondence is the correspondence between pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the second correspondence is the correspondence between pixels in the second image and the coordinate points of the second standard three-dimensional face model;

a pixel assignment submodule (not shown in Fig. 7), configured to swap the face features of the first image and the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain a third image and a fourth image.
As an implementation of the embodiment of the present invention, the above pixel assignment submodule may include:
A first pixel determining unit (not shown in Fig. 7), configured to determine the pixel values of the coordinate points of the first standard three-dimensional face model according to the first correspondence;
A second pixel determining unit (not shown in Fig. 7), configured to determine the pixel values of the coordinate points of the second standard three-dimensional face model according to the second correspondence;
A first pixel assignment unit (not shown in Fig. 7), configured to assign the pixel values of the coordinate points of the second standard three-dimensional face model to the determined pixels in the first image, to obtain the third image;
A second pixel assignment unit (not shown in Fig. 7), configured to assign the pixel values of the coordinate points of the first standard three-dimensional face model to the determined pixels in the second image, to obtain the fourth image.
As an implementation of the embodiment of the present invention, the above apparatus may further include:
A ratio determining module (not shown in Fig. 7), configured to determine a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;
A ratio judging module (not shown in Fig. 7), configured to judge whether the first ratio and the second ratio are respectively greater than a preset ratio;
A processing module (not shown in Fig. 7), configured to process, in a preset processing manner, a target image for which the judgment result is yes, wherein the target image is the third image and/or the fourth image.
As an implementation of the embodiment of the present invention, the above processing module may include:
An amplitude determining unit (not shown in Fig. 7), configured to calculate the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;
An amplitude judging unit (not shown in Fig. 7), configured to judge whether the opening amplitude is greater than a preset amplitude;
A completion processing unit (not shown in Fig. 7), configured to perform tooth completion processing on the mouth in the target image when the opening amplitude is greater than the preset amplitude.
As an implementation of the embodiment of the present invention, the above apparatus may further include:
A fusion processing module (not shown in Fig. 7), configured to perform graph-cut fusion processing on the face regions in the third image and the fourth image.
As an implementation of the embodiment of the present invention, the above image features may further include an angle feature and/or a brightness feature.
An embodiment of the present invention further provides an electronic device. As shown in Fig. 8, the electronic device may include a processor 801, a communication interface 802, a memory 803 and a communication bus 804, wherein the processor 801, the communication interface 802 and the memory 803 communicate with one another through the communication bus 804.
The memory 803 is configured to store a computer program.
The processor 801 is configured to implement the following steps when executing the program stored in the memory 803:
obtaining a first image and a second image, wherein both the first image and the second image contain a face;
processing the first image and the second image through a pre-obtained deep neural network model, to obtain a first standard three-dimensional face model and a second standard three-dimensional face model corresponding to the first image and the second image respectively, wherein the deep neural network model is trained based on pre-obtained face image samples, the deep neural network model includes the correspondence between image features and standard three-dimensional face models, and the image features include face features and expression features;
swapping the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image.
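The processor steps above can be sketched end to end. This is an illustrative outline only: the deep-network fit is stubbed out, and the function names (`fit_3d_model`, `swap_faces`) are assumptions, not taken from the patent.

```python
import numpy as np

def fit_3d_model(image):
    """Stub for the deep-network fit: a real system would regress a 3D
    face model (identity + expression) here. We return the nonzero pixel
    coordinates as dummy "model coordinate points"."""
    return {"coords": [tuple(c) for c in np.argwhere(image > 0)]}

def swap_faces(img1, img2):
    """Swap the face regions of two images through their fitted models."""
    m1, m2 = fit_3d_model(img1), fit_3d_model(img2)
    out1, out2 = img1.copy(), img2.copy()
    # each model coordinate point maps to a pixel in its own image; swap
    # the pixel values through that correspondence (identity map here)
    for (y, x) in set(m1["coords"]) & set(m2["coords"]):
        out1[y, x], out2[y, x] = img2[y, x], img1[y, x]
    return out1, out2
```

In the patent's pipeline the correspondence between pixels and model coordinate points comes from the fitted models; the identity map above merely stands in for it.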
It can be seen that, in the solution provided by the embodiment of the present invention, the electronic device first obtains a first image and a second image, both containing a face; processes them through the pre-obtained deep neural network model to obtain the corresponding first and second standard three-dimensional face models; and then swaps the face features of the two images according to the correspondence between the first image and the first standard three-dimensional face model and the correspondence between the second image and the second standard three-dimensional face model, to obtain a third image and a fourth image. Because the first and second standard three-dimensional face models are obtained by a deep neural network, and the deep neural network model includes the correspondence between image features and standard three-dimensional face models, where the image features include face features and expression features, the obtained standard three-dimensional face models are accurate and carry expression features, so that the faces in the resulting third and fourth images are accurate and more natural.
The communication bus of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, which does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a Random Access Memory (RAM), and may also include a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located away from the aforementioned processor.
The above processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The training method of the above deep neural network model may include:
obtaining a preset deep neural network model;
obtaining multiple face image samples and the corresponding standard three-dimensional face models, wherein the standard face models include expression features;
labeling the image features in the multiple face image samples, wherein the image features include face features and expression features;
inputting the labeled face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model, and training the preset deep neural network model;
stopping training and obtaining the deep neural network model when the accuracy of the output of the preset deep neural network model reaches a preset value or the number of iterations reaches a preset count.
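A minimal sketch of the stopping rule in the last step: train until the model's accuracy reaches the preset value or the iteration count reaches the preset maximum. The `train_step` and `evaluate` callables are hypothetical hooks, not from the patent.

```python
def train_until(train_step, evaluate, target_acc, max_iters):
    """Run training steps until accuracy >= target_acc or the iteration
    budget is exhausted; return the final (accuracy, iterations)."""
    acc, iters = 0.0, 0
    while acc < target_acc and iters < max_iters:
        train_step()      # one optimisation step on the labeled samples
        acc = evaluate()  # accuracy of the model's current output
        iters += 1
    return acc, iters
```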
The above step of swapping the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain the third image and the fourth image, may include:
determining a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;
determining, according to a first correspondence, a second correspondence and the third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model, wherein the first correspondence is the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the second correspondence is the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model;
swapping the face features of the first image and the face features of the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image.
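The three correspondences above can be pictured with plain dictionaries. In this hypothetical sketch, the first and second correspondences map image pixels to model coordinate points, and the third maps model-1 points to model-2 points; composing them yields, for each model-1 point, the matching pixel in image 2 (and symmetrically for image 1).

```python
def find_cross_pixels(first, second, third):
    """first: pixel(img1) -> model-1 point; second: pixel(img2) ->
    model-2 point; third: model-1 point -> model-2 point.
    Returns (img2 pixel per model-1 point, img1 pixel per model-2 point)."""
    inv1 = {c: p for p, c in first.items()}   # model-1 point -> img1 pixel
    inv2 = {c: p for p, c in second.items()}  # model-2 point -> img2 pixel
    img2_pixels = {c1: inv2[c2] for c1, c2 in third.items() if c2 in inv2}
    img1_pixels = {c2: inv1[c1] for c1, c2 in third.items() if c1 in inv1}
    return img2_pixels, img1_pixels
```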
The above step of swapping the face features of the first image and the face features of the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image, may include:
determining the pixel values of the coordinate points of the first standard three-dimensional face model according to the first correspondence;
determining the pixel values of the coordinate points of the second standard three-dimensional face model according to the second correspondence;
assigning the pixel values of the coordinate points of the second standard three-dimensional face model to the determined pixels in the first image, to obtain the third image;
assigning the pixel values of the coordinate points of the first standard three-dimensional face model to the determined pixels in the second image, to obtain the fourth image.
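The four assignment steps above read each model point's pixel value from its own image and then write the other model's values through the cross-image pixels. Images here are simple pixel-to-value dictionaries, an illustrative assumption.

```python
def swap_pixel_values(img1, img2, first, second, third):
    """img1/img2: dicts mapping pixel -> value. Returns (third image,
    fourth image) with face pixel values exchanged via the models."""
    val1 = {c: img1[p] for p, c in first.items()}   # model-1 point values
    val2 = {c: img2[p] for p, c in second.items()}  # model-2 point values
    inv1 = {c: p for p, c in first.items()}
    inv2 = {c: p for p, c in second.items()}
    out1, out2 = dict(img1), dict(img2)
    for c1, c2 in third.items():
        if c1 in inv1 and c2 in inv2:
            out1[inv1[c1]] = val2[c2]  # model-2 values into image 1
            out2[inv2[c2]] = val1[c1]  # model-1 values into image 2
    return out1, out2
```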
The above method may further include:
determining a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;
judging whether the first ratio and the second ratio are respectively greater than a preset ratio;
processing, in a preset processing manner, a target image for which the judgment result is yes, wherein the target image is the third image and/or the fourth image.
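A sketch of the mouth-ratio check above, assuming the mouth and face regions are available as pixel counts; the preset ratio of 0.08 is an illustrative value, not from the patent.

```python
def mouth_needs_processing(mouth_pixels, face_pixels, preset_ratio=0.08):
    """True when the mouth occupies more than the preset share of the face."""
    if face_pixels == 0:
        return False
    return mouth_pixels / face_pixels > preset_ratio
```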
The above step of processing the target image in the preset processing manner may include:
calculating the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;
judging whether the opening amplitude is greater than a preset amplitude;
if so, performing tooth completion processing on the mouth in the target image.
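One plausible reading of "opening amplitude", sketched from landmark geometry: the vertical lip gap normalised by mouth width. The landmark layout and the 0.3 threshold are assumptions for illustration; the actual preset amplitude is not specified in the text.

```python
def opening_amplitude(upper_lip, lower_lip, left_corner, right_corner):
    """Vertical gap between the lip landmarks divided by mouth width."""
    gap = abs(lower_lip[1] - upper_lip[1])
    width = abs(right_corner[0] - left_corner[0])
    return gap / width if width else 0.0

def needs_tooth_completion(landmarks, preset_amplitude=0.3):
    """landmarks: (upper_lip, lower_lip, left_corner, right_corner),
    each an (x, y) point."""
    return opening_amplitude(*landmarks) > preset_amplitude
```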
The above method may further include:
performing graph-cut fusion processing on the face regions in the third image and the fourth image.
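The fusion step above can be approximated, for illustration, by a soft-mask alpha blend of the swapped face region into the target image; a production system would typically use seam-aware blending such as graph cut or OpenCV's `cv2.seamlessClone`. The mask convention below is an assumption.

```python
import numpy as np

def blend_face(background, face, mask):
    """mask in [0, 1]: 1.0 keeps the swapped face, 0.0 keeps the
    background; intermediate values feather the seam."""
    mask = np.asarray(mask, dtype=float)
    return mask * face + (1.0 - mask) * background
```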
The above image features may further include an angle feature and/or a brightness feature.
An embodiment of the present invention further provides a computer-readable storage medium. A computer program is stored in the computer-readable storage medium, and when executed by a processor, the computer program implements the following steps:
obtaining a first image and a second image, wherein both the first image and the second image contain a face;
processing the first image and the second image through a pre-obtained deep neural network model, to obtain a first standard three-dimensional face model and a second standard three-dimensional face model corresponding to the first image and the second image respectively, wherein the deep neural network model is trained based on pre-obtained face image samples, the deep neural network model includes the correspondence between image features and standard three-dimensional face models, and the image features include face features and expression features;
swapping the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image.
It can be seen that, in the solution provided by the embodiment of the present invention, when the computer program is executed by a processor, a first image and a second image, both containing a face, are first obtained; they are processed through the pre-obtained deep neural network model to obtain the corresponding first and second standard three-dimensional face models; and the face features of the two images are then swapped according to the correspondence between the first image and the first standard three-dimensional face model and the correspondence between the second image and the second standard three-dimensional face model, to obtain a third image and a fourth image. Because the first and second standard three-dimensional face models are obtained by a deep neural network, and the deep neural network model includes the correspondence between image features and standard three-dimensional face models, where the image features include face features and expression features, the obtained standard three-dimensional face models are accurate and carry expression features, so that the faces in the resulting third and fourth images are accurate and more natural.
The training method of the above deep neural network model may include:
obtaining a preset deep neural network model;
obtaining multiple face image samples and the corresponding standard three-dimensional face models, wherein the standard face models include expression features;
labeling the image features in the multiple face image samples, wherein the image features include face features and expression features;
inputting the labeled face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model, and training the preset deep neural network model;
stopping training and obtaining the deep neural network model when the accuracy of the output of the preset deep neural network model reaches a preset value or the number of iterations reaches a preset count.
The above step of swapping the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain the third image and the fourth image, may include:
determining a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;
determining, according to a first correspondence, a second correspondence and the third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model, wherein the first correspondence is the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the second correspondence is the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model;
swapping the face features of the first image and the face features of the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image.
The above step of swapping the face features of the first image and the face features of the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image, may include:
determining the pixel values of the coordinate points of the first standard three-dimensional face model according to the first correspondence;
determining the pixel values of the coordinate points of the second standard three-dimensional face model according to the second correspondence;
assigning the pixel values of the coordinate points of the second standard three-dimensional face model to the determined pixels in the first image, to obtain the third image;
assigning the pixel values of the coordinate points of the first standard three-dimensional face model to the determined pixels in the second image, to obtain the fourth image.
The above method may further include:
determining a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;
judging whether the first ratio and the second ratio are respectively greater than a preset ratio;
processing, in a preset processing manner, a target image for which the judgment result is yes, wherein the target image is the third image and/or the fourth image.
The above step of processing the target image in the preset processing manner may include:
calculating the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;
judging whether the opening amplitude is greater than a preset amplitude;
if so, performing tooth completion processing on the mouth in the target image.
The above method may further include:
performing graph-cut fusion processing on the face regions in the third image and the fourth image.
The above image features may further include an angle feature and/or a brightness feature.
An embodiment of the present application further provides an application program product, the application program product being configured to execute, at runtime, the face swapping method in an image of any of the above embodiments.
It should be noted that, for the above apparatus, electronic device, computer-readable storage medium and application program product embodiments, since they are substantially similar to the method embodiments, their descriptions are relatively simple; for relevant details, refer to the descriptions of the method embodiments.
It should further be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes the element.
The embodiments in this specification are described in a related manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments.
The foregoing is merely a description of preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A face swapping method in an image, characterized in that the method comprises:
obtaining a first image and a second image, wherein both the first image and the second image contain a face;
processing the first image and the second image through a pre-obtained deep neural network model, to obtain a first standard three-dimensional face model and a second standard three-dimensional face model corresponding to the first image and the second image respectively, wherein the deep neural network model is trained based on pre-obtained face image samples, the deep neural network model includes the correspondence between image features and standard three-dimensional face models, and the image features include face features and expression features;
swapping the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image.
2. The method according to claim 1, characterized in that the training method of the deep neural network model comprises:
obtaining a preset deep neural network model;
obtaining multiple face image samples and the corresponding standard three-dimensional face models, wherein the standard face models include expression features;
labeling the image features in the multiple face image samples, wherein the image features include face features and expression features;
inputting the labeled face image samples and the corresponding standard three-dimensional face models into the preset deep neural network model, and training the preset deep neural network model;
stopping training and obtaining the deep neural network model when the accuracy of the output of the preset deep neural network model reaches a preset value or the number of training iterations of the face image samples reaches a preset count.
3. The method according to claim 1, characterized in that the step of swapping the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain the third image and the fourth image, comprises:
determining a third correspondence between the coordinate points of the first standard three-dimensional face model and the coordinate points of the second standard three-dimensional face model;
determining, according to a first correspondence, a second correspondence and the third correspondence, the pixels in the second image corresponding to the coordinate points of the first standard three-dimensional face model and the pixels in the first image corresponding to the coordinate points of the second standard three-dimensional face model, wherein the first correspondence is the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model, and the second correspondence is the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model;
swapping the face features of the first image and the face features of the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image.
4. The method according to claim 3, characterized in that the step of swapping the face features of the first image and the face features of the second image based on the determined first correspondence, the second correspondence and the determined pixels, to obtain the third image and the fourth image, comprises:
determining the pixel values of the coordinate points of the first standard three-dimensional face model according to the first correspondence;
determining the pixel values of the coordinate points of the second standard three-dimensional face model according to the second correspondence;
assigning the pixel values of the coordinate points of the second standard three-dimensional face model to the determined pixels in the first image, to obtain the third image;
assigning the pixel values of the coordinate points of the first standard three-dimensional face model to the determined pixels in the second image, to obtain the fourth image.
5. The method according to any one of claims 1-4, characterized in that the method further comprises:
determining a first ratio occupied by the mouth of the face in the third image and a second ratio occupied by the mouth of the face in the fourth image;
judging whether the first ratio and the second ratio are respectively greater than a preset ratio;
processing, in a preset processing manner, a target image for which the judgment result is yes, wherein the target image is the third image and/or the fourth image.
6. The method according to claim 5, characterized in that the step of processing the target image in the preset processing manner comprises:
calculating the opening amplitude of the mouth in the target image according to the mouth feature points in the target image;
judging whether the opening amplitude is greater than a preset amplitude;
if so, performing tooth completion processing on the mouth in the target image.
7. The method according to any one of claims 1-4, characterized in that the method further comprises:
performing graph-cut fusion processing on the face regions in the third image and the fourth image.
8. A face swapping apparatus in an image, characterized in that the apparatus comprises:
an image obtaining module, configured to obtain a first image and a second image, wherein both the first image and the second image contain a face;
a three-dimensional model obtaining module, configured to process the first image and the second image through a pre-obtained deep neural network model, to obtain a first standard three-dimensional face model and a second standard three-dimensional face model corresponding to the first image and the second image respectively, wherein the deep neural network model is trained by a model training module based on pre-obtained face image samples, the deep neural network model includes the correspondence between image features and standard three-dimensional face models, and the image features include face features and expression features;
a face swapping module, configured to swap the face features of the first image and the face features of the second image according to the correspondence between the pixels in the first image and the coordinate points of the first standard three-dimensional face model and the correspondence between the pixels in the second image and the coordinate points of the second standard three-dimensional face model, to obtain a third image and a fourth image.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured to implement the method steps of any one of claims 1-7 when executing the program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1-7.
CN201811214643.0A 2018-10-18 2018-10-18 Face exchange method and device in image and electronic equipment Active CN109492540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811214643.0A CN109492540B (en) 2018-10-18 2018-10-18 Face exchange method and device in image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811214643.0A CN109492540B (en) 2018-10-18 2018-10-18 Face exchange method and device in image and electronic equipment

Publications (2)

Publication Number Publication Date
CN109492540A true CN109492540A (en) 2019-03-19
CN109492540B CN109492540B (en) 2020-12-25

Family

ID=65691471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811214643.0A Active CN109492540B (en) 2018-10-18 2018-10-18 Face exchange method and device in image and electronic equipment

Country Status (1)

Country Link
CN (1) CN109492540B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160138A (en) * 2019-12-11 2020-05-15 杭州电子科技大学 Fast face exchange method based on convolutional neural network
CN112001355A (en) * 2020-09-03 2020-11-27 杭州云栖智慧视通科技有限公司 Training data preprocessing method for fuzzy face recognition under outdoor video
CN112070662A (en) * 2020-11-12 2020-12-11 北京达佳互联信息技术有限公司 Evaluation method and device of face changing model, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102196245A (en) * 2011-04-07 2011-09-21 Beijing Vimicro Corp Video playback method and device based on character interaction
CN103824054A (en) * 2014-02-17 2014-05-28 Beijing Megvii Technology Co Ltd Face attribute recognition method based on cascaded deep neural networks
CN103824049A (en) * 2014-02-17 2014-05-28 Beijing Megvii Technology Co Ltd Face key point detection method based on cascaded neural networks
CN103914676A (en) * 2012-12-30 2014-07-09 Hangzhou Langhe Technology Co Ltd Method and apparatus for use in face recognition
CN105118024A (en) * 2015-09-14 2015-12-02 Beijing Smarter Eye Technology Co Ltd Face exchange method
CN105849747A (en) * 2013-11-30 2016-08-10 Beijing SenseTime Technology Development Co Ltd Method and system for face image recognition
CN106534757A (en) * 2016-11-22 2017-03-22 Beijing Kingsoft Internet Security Software Co Ltd Face exchange method and device, anchor terminal and audience terminal
CN107330904A (en) * 2017-06-30 2017-11-07 Beijing Kingsoft Internet Security Software Co Ltd Image processing method and device, electronic device and storage medium
CN107564080A (en) * 2017-08-17 2018-01-09 Beijing Miji Technology Co Ltd Facial image replacement system
CN107609519A (en) * 2017-09-15 2018-01-19 Vivo Mobile Communication Co Ltd Method and device for locating facial feature points
CN107610202A (en) * 2017-08-17 2018-01-19 Beijing Miji Technology Co Ltd Marketing method, device and storage medium based on facial image replacement

Also Published As

Publication number Publication date
CN109492540B (en) 2020-12-25

Similar Documents

Publication Publication Date Title
CN103914699B Automatic lip-gloss image enhancement method based on color space
CN107993216A Image fusion method and device, storage medium and terminal
CN107025629A Image processing method and mobile terminal
CN109492540A Face exchange method and device in image, and electronic device
CN105184249A Method and device for processing face image
CN108550176A Image processing method, device and storage medium
CN107610209A Facial expression synthesis method and device, storage medium and computer device
CN107145833A Method and apparatus for determining face region
CN108305312A Method and device for generating 3D virtual avatar
CN106469302A Face skin quality detection method based on artificial neural network
CN107180446A Method and device for generating expression animation of character face model
CN109952594A Image processing method, device, terminal and storage medium
CN107392933B Image segmentation method and mobile terminal
CN108052984A Counting method and device
CN102567716B Face synthesis system and implementation method
CN109191508A Simulated beautification device, and simulated face-lifting method and device
EP3385914A1 Method of controlling a device for generating an augmented reality environment
CN107527034A Face contour adjustment method and mobile terminal
CN108090450A Face recognition method and device
CN109064549A Landmark detection model generation method and landmark detection method
CN109584153A Method, device and system for modifying eyes
CN110956071B Eye key point labeling and detection model training method and device
CN107203963A Image processing method and device, and electronic device
CN107018330A Real-time photographing guidance method and device
CN110232326A 3D object recognition method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant