CN111860044A - Face changing method, device and equipment and computer storage medium - Google Patents


Info

Publication number
CN111860044A
CN111860044A (application CN201910344573.9A)
Authority
CN
China
Prior art keywords
face
image
changing
expression
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910344573.9A
Other languages
Chinese (zh)
Inventor
覃威宁
郑天祥
周润楠
王山虎
张涛
唐杰
Current Assignee (The listed assignees may be inaccurate.)
Beijing Momo Information Technology Co ltd
Original Assignee
Beijing Momo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion.)
Filing date
Publication date
Application filed by Beijing Momo Information Technology Co., Ltd.
Priority to CN201910344573.9A
Publication of CN111860044A
Legal status: Pending

Classifications

    All classifications fall under G (Physics) › G06 (Computing; calculating or counting):
    • G06V40/168 — Feature extraction; face representation (recognition of human faces in image or video data)
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting (pattern recognition)
    • G06T3/04
    • G06V40/172 — Classification, e.g. identification (human faces)
    • G06V40/174 — Facial expression recognition (human faces)

Abstract

The invention discloses a face changing method, apparatus, device, and computer storage medium. The method comprises the following steps: obtaining a first face image and a second face image; identifying key points in the second face image, and extracting a contour map of the region delineated by the key points to obtain a first expression-pose map; and inputting the first expression-pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changed image in which the first expression-pose map and the first face image are fused. According to the embodiments of the invention, the face changing operation is performed by stripping the expression from the target face, which makes the face changing result look more natural. The invention also discloses an apparatus, a device, and a computer storage medium based on the method.

Description

Face changing method, device and equipment and computer storage medium
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a face changing method, a face changing device, face changing equipment and a computer storage medium.
Background
With the development of networks, social software with the nature of mass entertainment is increasing, and in many social software with the functions of live broadcast, video shooting, image editing and the like, face exchange gradually becomes a new hotspot of mass entertainment, and has increasingly wide application scenes. Face exchange or face change techniques refer to changing one person's face into another person's face in an image or video.
However, with current face changing techniques, the facial expression in the resulting image or video often looks unnatural, which degrades the face changing effect.
Disclosure of Invention
The embodiments of the invention provide a face changing method, apparatus, device, and computer storage medium, which perform face changing by stripping the expression from the target face and thereby make the result look more natural.
In one aspect, an embodiment of the present invention provides a face changing method, where the method includes:
obtaining a first face image and a second face image;
identifying key points in the second face image, and extracting a contour map of the region delineated by the key points to obtain a first expression-pose map;
and inputting the first expression-pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changed image in which the first expression-pose map and the first face image are fused.
In another aspect, an embodiment of the invention provides a face changing apparatus, which comprises:
an image obtaining module, used for obtaining a first face image and a second face image;
an expression extraction module, used for identifying key points in the second face image and extracting a contour map of the region delineated by the key points to obtain a first expression-pose map;
and an expression fusion module, used for inputting the first expression-pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changed image in which the first expression-pose map and the first face image are fused.
In another aspect, an embodiment of the present invention provides a face changing device, where the face changing device includes:
a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements any of the face changing methods described above.
In yet another aspect, an embodiment of the present invention provides a computer storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, any of the face changing methods described above is implemented.
The face changing method, apparatus, device, and computer storage medium extract the contour map of the region delineated by the key points in the second face image (the face to be replaced) to obtain the first expression-pose map, so that the first expression-pose map contains only the expression information of the key parts of the second face image. The first expression-pose map and the first face image are then input into a trained GAN network for fusion, yielding a face-changed image in which the two are fused. This face-changed image is the result of replacing the second face image with the first face image. Because the expression information in the second face image is extracted separately, and this information reflects both the expression of the face and its pose angle, the first face image can be adjusted directly according to the extracted first expression-pose map. A natural face-changed image with the expression and pose of the second face image is thus obtained, the naturalness of the person's expression and pose after face changing is improved as much as possible, and the face changing effect is better.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a face changing method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a GAN model training process according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the operation of the GAN network;
FIG. 4 is a schematic diagram of a first expression-pose map;
fig. 5 is a schematic structural diagram of a face changing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a face changing device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In order to solve the problem of the prior art, embodiments of the present invention provide a face changing method, apparatus, device and computer storage medium. First, a face changing method provided by the embodiment of the present invention is described below.
Fig. 1 is a schematic flow chart illustrating a face changing method according to an embodiment of the present invention. The method comprises the following steps:
Step S11: obtaining a first face image and a second face image.
The face changing operation means that face A is used to replace face B in an image or a video, so that face A in the face-changed image or video presents the expression of the original face B as naturally as possible. Here, the first face image refers to the image of face A, and the second face image refers to the image of face B. The first face image and the second face image referred to in the present invention are images of the face region only, and do not include hair or background.
Step S12: identifying key points in the second face image, and extracting a contour map of the region delineated by the key points to obtain a first expression-pose map.
FIG. 4 is a schematic diagram of a first expression-pose map. The key points in a face image are feature points of the parts that can represent facial features, and they are used to distinguish different faces. Obtaining the key points determines the shape of each facial part, from which the expression and pose of the face image can be represented. Here, the expression refers to features such as laughing, smiling, or sadness, and the pose refers to facial angle features such as turning left or right, tilting up or down, in-plane head rotation, or facing forward. The contour map of the region delineated by the key points only indicates the outline of that region: it is an image formed by connecting feature points and contains no facial attributes such as skin or illumination. Therefore, by extracting this contour map, the expression information of the second face image can be extracted.
Step S13: inputting the first expression-pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changed image in which the first expression-pose map and the first face image are fused.
According to the face changing method above, the contour map of the region delineated by the key points in the second face image (the face to be replaced) is extracted to obtain the first expression-pose map, so that the first expression-pose map contains only the expression information of the key parts of the second face image. The first expression-pose map and the first face image are then input into a trained GAN network for fusion, yielding a face-changed image in which the two are fused; this face-changed image is the result of replacing the second face image with the first face image. Because the expression information in the second face image is extracted separately, and the expression and pose (facial angle, etc.) of the face can be represented by this information, the expression and pose of the first face image are adjusted directly according to the extracted expression information. A face-changed image with the expression and pose of the second face image is thus obtained, the naturalness of the person's expression and pose after face changing is ensured as far as possible, and the face changing effect is better.
Fig. 3 is a schematic diagram of the operation of the GAN network. The GAN network comprises a G network and a D network: G is a generator, responsible for generating data; D is a discriminator, responsible for judging whether data is real or generated. The GAN training process is as follows: G generates data and inputs it to D; D compares the data generated by G with the real (ground-truth) data; and the parameters of G and D are adjusted according to the comparison error. The parameters of G and D may be adjusted by gradient ascent or gradient descent; the present invention does not limit the algorithm used to adjust the internal parameters of the GAN network.
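For illustration only (this sketch is not part of the patent), the alternating G/D update described above can be shown on a one-dimensional toy problem: G is a learned offset applied to noise, D is a logistic score with a learned threshold, and the gradients are taken numerically, since the patent leaves the update algorithm open. All names and the toy setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): real data clusters around 4.0; G shifts
# standard-normal noise by a learned offset theta_g; D scores a sample
# as "real" with a logistic function around a learned threshold theta_d.
def g(z, theta_g):
    return z + theta_g

def d(x, theta_d):
    return 1.0 / (1.0 + np.exp(-(x - theta_d)))  # probability of "real"

def d_loss(theta_d, theta_g, z, x_real):
    fake = g(z, theta_g)  # D wants real -> 1 and fake -> 0
    return -np.mean(np.log(d(x_real, theta_d) + 1e-9)
                    + np.log(1.0 - d(fake, theta_d) + 1e-9))

def g_loss(theta_g, theta_d, z):
    # G wants its generated samples to be scored as real by D
    return -np.mean(np.log(d(g(z, theta_g), theta_d) + 1e-9))

def num_grad(f, theta, eps=1e-5):
    # central-difference gradient; any gradient method would do here
    return (f(theta + eps) - f(theta - eps)) / (2.0 * eps)

theta_g, theta_d, lr = 0.0, 0.0, 0.1
z = rng.normal(size=512)
x_real = rng.normal(loc=4.0, scale=0.5, size=512)

# One alternating round: update D against fixed G, then G against updated D.
d_before = d_loss(theta_d, theta_g, z, x_real)
theta_d -= lr * num_grad(lambda t: d_loss(t, theta_g, z, x_real), theta_d)
d_after = d_loss(theta_d, theta_g, z, x_real)

g_before = g_loss(theta_g, theta_d, z)
theta_g -= lr * num_grad(lambda t: g_loss(t, theta_d, z), theta_g)
g_after = g_loss(theta_g, theta_d, z)
```

Each player's own update decreases its own loss on the fixed batch; repeating the round drives G's samples toward the real data, which is the adjustment-by-comparison-error loop the paragraph describes.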
In the invention, the first expression-pose map is supplied as a conditioning input of the GAN model, so that the output of the GAN model is controllable and the expected effect of the whole face changing process can be ensured.
In a preferred embodiment, in Step S12, the process of extracting the contour map of the region delineated by the key points includes: extracting and connecting the key points to obtain a plurality of closed figures formed by them, and filling each closed figure with a preset color to obtain the first expression-pose map.
Specifically, as mentioned in the foregoing embodiments, in order for the key points to determine the position and shape of a facial part, the key points of each part should, once connected, form a closed figure that surrounds the corresponding part (e.g., an eye). Since the expression-pose map the invention aims to obtain is a contour map, i.e., an outline-only image, each closed figure is filled with a preset color, and the color-filled closed figures are then extracted to obtain the first expression-pose map.
It can further be understood that if two closed figures are symmetric in position, they are filled with different colors.
There are symmetric parts on the face, such as the eyebrows and the eyes. If these symmetric parts were not distinguished, the subsequent GAN network might be unable to tell left from right; if left and right were confused — for example, if the shape of the left eyebrow were fused with the right eyebrow of the first face image — the expression in the final face-changed image would clearly differ from that of the second face image, resulting in an unnatural result. Therefore, to avoid left-right inversion, this embodiment fills symmetric closed figures with different colors, making them easy for the subsequent GAN network to identify. It should be noted that the filling color for the closed figure at each part is fixed in advance — for example, the closed figure at the right eyebrow is filled with green and that at the left eyebrow with red — so that the subsequent GAN model can identify the color and overall position of each closed figure, determine which part it corresponds to, and fuse it with the first face image. Of course, the specific filling color for each part is not limited by the present invention.
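To make the color-filling step concrete, the following sketch (illustrative only, not taken from the patent) rasterizes the closed figure formed by connecting a part's key points using an even-odd ray-casting fill, and uses different preset colors for symmetric parts such as the two eyebrows. The polygon coordinates and the green/red color assignment are assumptions.

```python
import numpy as np

def fill_polygon(canvas, pts, color):
    """Fill the closed figure formed by connecting key points `pts`
    (list of (x, y) tuples) with `color`, via even-odd ray casting."""
    h, w, _ = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]          # wrap around to close the figure
        crosses = (y0 > ys) != (y1 > ys)   # does this edge span the pixel row?
        # x-coordinate where the edge intersects the pixel's row
        x_int = (x1 - x0) * (ys - y0) / ((y1 - y0) + 1e-12) + x0
        inside ^= crosses & (xs < x_int)
    canvas[inside] = color
    return canvas

# Hypothetical expression-pose map: two symmetric "eyebrow" figures filled
# with different preset colors so the GAN can tell left from right.
pose_map = np.zeros((20, 40, 3), dtype=np.uint8)
right_brow = [(5, 5), (15, 4), (15, 8), (5, 9)]
left_brow = [(25, 4), (35, 5), (35, 9), (25, 8)]
fill_polygon(pose_map, right_brow, (0, 255, 0))  # preset: green
fill_polygon(pose_map, left_brow, (255, 0, 0))   # preset: red
```

The resulting `pose_map` carries only outlines and preset fill colors — no skin or illumination — matching the contour-map property described above.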
In a preferred embodiment, the region delineated by the key points includes the eyebrow region, the eye region, the nose region, and the lip region.
Specifically, the above embodiment defines the region delineated by the key points as the region of the facial features, because the facial features are the parts that best reflect expression, and because their shapes differ visibly when the face is at different angular poses, the pose of the face can be inferred from their shapes. The eyebrow regions are also delineated by key points because, for many people, the eyebrows move along with the eyes, nose, and lips under different expressions and poses, so eyebrow information likewise reflects expression information.
In addition, in a preferred embodiment, the region delineated by the key points may also contain the facial contour.
This is because different people have different facial contours; if the facial contour is not replaced during face changing, the replaced face will not match the person's body, making the result unnatural. In addition, the shape of the facial contour also directly reflects the angular pose of the face.
The above are only two specific embodiments, mainly applied to scenes in which the whole face is replaced. In some scenes, such as beautification, the user may only want to change the eyes — for example, replacing his or her eyes with those of a celebrity — in which case the region delineated by the key points may include only the eye region. Of course, the region delineated by the key points depends on the requirements of the application scene, and the invention is not limited in this respect.
In addition, the number of key points in the present invention may be set to 68, 87, or 137; of course, other values may also be used, and the number of key points is not limited by the present invention.
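For reference, one common indexing for the 68-point case — the convention used by the widely deployed iBUG/dlib landmark detectors — groups the key points by facial part as sketched below. The patent does not fix any particular indexing, so this grouping is an assumption for illustration only.

```python
# Hypothetical grouping, assuming the common 68-point landmark convention.
LANDMARK_REGIONS = {
    "face_contour":  range(0, 17),   # jaw line
    "right_eyebrow": range(17, 22),
    "left_eyebrow":  range(22, 27),
    "nose":          range(27, 36),
    "right_eye":     range(36, 42),
    "left_eye":      range(42, 48),
    "lips":          range(48, 68),
}

def region_points(landmarks, region):
    """Select the key points that delineate one facial region."""
    return [landmarks[i] for i in LANDMARK_REGIONS[region]]

total = sum(len(r) for r in LANDMARK_REGIONS.values())
```

Under this convention the seven groups partition all 68 points, so each closed figure of the expression-pose map can be built from one group's points.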
Fig. 2 is a flowchart illustrating a GAN model training process according to an embodiment of the present invention. The training process of the GAN network comprises the following steps:
Step S21: selecting two face images belonging to the same person as the third face image and the fourth face image, respectively;
Step S22: identifying key points in the fourth face image, and extracting a contour map of the region delineated by the key points to obtain a second expression-pose map;
Step S23: inputting the second expression-pose map and the third face image into the GAN network to be trained to obtain a face-changed image under test;
Step S24: comparing the face-changed image under test with the fourth face image, and adjusting the internal parameters of the GAN network according to the comparison error;
Step S25: judging whether the current comparison error meets the preset error requirement; if so, the GAN model is obtained; otherwise, selecting different face images and returning to Step S21. In this step, different face images are selected and the training operation is repeated until the comparison error meets the preset error requirement, at which point the current GAN network is the trained GAN model.
In this embodiment, two face images of the same person are selected for each training iteration because a ground-truth value must be set during GAN training — that is, a target image that the network should ideally output. Since the aim of the present invention is to improve face changing methods whose results are not natural enough, selecting two face images that do not belong to the same person would make it difficult to obtain a corresponding ground truth, while selecting two face images with the same expression would complicate the collection of training samples. Therefore, this embodiment directly adopts two face images of the same person with different expressions: after the face in face image 1 is fused with the expression in face image 2, the theoretical ground truth is face image 2, i.e., the fourth face image. This training scheme simplifies the collection of training samples and ground-truth values and facilitates training of the GAN model.
In addition, the training samples may include multiple groups of face images of the same person — for example, 10 groups of face images of one person, each group containing two face images with different expressions — or multiple groups of face images of different people, e.g., one group per person. Of course, these are only specific implementations, and the present invention does not limit the composition of the training samples.
In other embodiments, during training of the GAN model, the expression information in the face-changed image under test output by the GAN network may also be extracted to obtain an expression-pose map under test, and the second expression-pose map may then be used as the ground truth and compared with the map under test to determine the comparison error. In this case, the training sample is not limited to two face images of the same person but can be any two face images, which further simplifies sample collection. Of course, these are only specific implementations, and the invention does not limit the specific training process of the GAN model.
Based on the above analysis of the training process, current point-to-point face changing schemes use models that can only swap the faces of two specific people, or only change to one fixed person's face. In the present invention, the trained GAN network properly fuses whatever face image and expression-pose map are subsequently input, producing a face-changed image with a natural expression. Since only the fusion operation is trained and the method does not depend on whose face image is supplied, it is not limited to two fixed people but can change faces between any two people; it has few restrictions, supports many-to-many face changing, and has a wider range of application. In addition, compared with face changing by replacing pixels one by one, this method has a simple process and high face changing speed and efficiency.
In an embodiment, after the face-changed image is obtained in Step S13, the method further includes: replacing the face portion in the original image corresponding to the second face image with the face-changed image to obtain a final face-changed image.
Specifically, since the face-changed image contains only the face portion, it must be pasted over the face portion of the original image corresponding to the second face image to obtain the final face-changed image.
The original image corresponding to the second face image may be a video frame in the video whose faces are to be changed. Of course, whether the original image corresponding to the second face image is a standalone image or a video frame depends on the actual requirements of the user, and the invention is not limited in this respect.
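A minimal sketch of this paste-back step follows (illustrative only; the crop-box convention and the hard paste are assumptions — production systems typically blend the seam, e.g. with feathering or Poisson blending, to hide the boundary):

```python
import numpy as np

def paste_back(original, face_changed, box):
    """Replace the face region of the original frame with the face-changed
    image. `box` = (top, left, height, width) is assumed to be the region
    the second face image was originally cropped from."""
    t, l, h, w = box
    assert face_changed.shape[:2] == (h, w), "crop size mismatch"
    out = original.copy()
    out[t:t + h, l:l + w] = face_changed  # hard paste, no seam blending
    return out

# Tiny demo frame and face-changed crop (hypothetical sizes).
frame = np.zeros((10, 10, 3), dtype=np.uint8)
face = np.full((4, 4, 3), 200, dtype=np.uint8)
result = paste_back(frame, face, (3, 3, 4, 4))
```

For video, the same call is applied per frame, with `box` tracked per frame by the key-point detector.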
An embodiment of the present invention provides a face changing apparatus, and Fig. 5 is a schematic structural diagram of the face changing apparatus provided in an embodiment of the present invention. The apparatus includes:
an image obtaining module 1, used for obtaining a first face image and a second face image;
an expression extraction module 2, used for identifying key points in the second face image and extracting a contour map of the region delineated by the key points to obtain a first expression-pose map;
and an expression fusion module 3, used for inputting the first expression-pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changed image in which the first expression-pose map and the first face image are fused.
Fig. 6 shows a schematic diagram of a hardware structure of a face changing device according to an embodiment of the present invention.
The face-changing device may comprise a processor 301 and a memory 302 in which computer program instructions are stored. The processor 301 reads and executes the computer program instructions stored in the memory 302 to implement any one of the face changing methods in the above embodiments.
In particular, the processor 301 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 302 may include mass storage for data or instructions. By way of example, and not limitation, memory 302 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 302 may include removable or non-removable (or fixed) media, where appropriate. The memory 302 may be internal or external to the face changing device, where appropriate. In a particular embodiment, the memory 302 is a non-volatile solid-state memory. In a particular embodiment, the memory 302 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of these.
In one example, the face-changing device may also include a communication interface 303 and a bus 310. As shown in fig. 6, the processor 301, the memory 302, and the communication interface 303 are connected via a bus 310 to complete communication therebetween.
The communication interface 303 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiment of the present invention.
Bus 310 includes hardware, software, or both, coupling the components of the face changing device to each other. By way of example, and not limitation, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Bus 310 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated.
In addition, in combination with the face changing method in the foregoing embodiments, an embodiment of the present invention provides a computer storage medium having computer program instructions stored thereon; when executed by a processor, the computer program instructions implement any of the face changing methods in the above embodiments.
It is to be understood that the invention is not limited to the specific arrangements and instrumentality described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, plug-in, function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the steps described above; that is, the steps may be performed in the order mentioned in the embodiments, in a different order, or simultaneously.
As described above, only specific embodiments of the present invention are provided. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described again here. It should be understood that the scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and such modifications or substitutions shall fall within the scope of the present invention.

Claims (10)

1. A face changing method, characterized by comprising the following steps:
obtaining a first face image and a second face image;
identifying key points in the second face image, and extracting an outline map of the regions defined by the key points to obtain a first expression pose map;
and inputting the first expression pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changing image in which the first expression pose map and the first face image are fused.
2. The face changing method according to claim 1, wherein extracting the outline map of the regions defined by the key points comprises:
connecting the key points to obtain a plurality of closed figures formed by the key points;
and filling the closed figures with preset colors to obtain the first expression pose map.
3. The face changing method according to claim 2, wherein if two closed figures are symmetrical in position, the two closed figures are filled with different colors.
4. The face changing method according to claim 1, wherein the training process of the GAN model comprises:
selecting two face images of the same person as a third face image and a fourth face image, respectively;
identifying key points in the fourth face image, and extracting an outline map of the regions defined by the key points to obtain a second expression pose map;
inputting the second expression pose map and the third face image into a GAN network to be trained to obtain a face-changing image to be tested;
comparing the face-changing image to be tested with the fourth face image, and adjusting internal parameters of the GAN network according to the comparison error;
and selecting different face images and repeating the above operations until the comparison error meets a preset error requirement, thereby obtaining the GAN model.
5. The face changing method according to claim 1, wherein after obtaining the face-changing image, the method further comprises:
replacing the face portion in the original image corresponding to the second face image with the face-changing image to obtain a finished face-changed image.
6. The face changing method according to claim 5, wherein the original image corresponding to the second face image is a video frame of a video to be face-changed.
7. The face changing method according to claim 1, wherein the regions defined by the key points comprise an eyebrow region, an eye region, a nose region, and a lip region.
8. A face changing device, the device comprising:
an image obtaining module, configured to obtain a first face image and a second face image;
an expression extraction module, configured to identify key points in the second face image and extract an outline map of the regions defined by the key points to obtain a first expression pose map;
and an expression fusion module, configured to input the first expression pose map and the first face image into a generative adversarial network (GAN) model to obtain a face-changing image in which the first expression pose map and the first face image are fused.
9. A face changing device, the device comprising: a processor and a memory storing computer program instructions;
wherein the processor, when executing the computer program instructions, implements the face changing method according to any one of claims 1 to 7.
10. A computer storage medium having computer program instructions stored thereon which, when executed by a processor, implement the face changing method according to any one of claims 1 to 7.
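Claims 2 and 3 describe building the expression pose map by connecting key points into closed figures and filling them with preset colors, with positionally symmetric figures (such as the two eyes) receiving different colors. The patent does not specify a fill algorithm; the sketch below is one hypothetical rasterization using even-odd ray casting, where the polygons, image size, and color choices are illustrative stand-ins rather than anything prescribed by the claims.

```python
import numpy as np

def point_in_polygon(r, c, poly):
    # Even-odd rule ray casting: cast a ray along the column axis and
    # count crossings with the polygon's edges.
    inside = False
    n = len(poly)
    for i in range(n):
        r1, c1 = poly[i]
        r2, c2 = poly[(i + 1) % n]
        if (r1 > r) != (r2 > r):
            cross = c1 + (r - r1) * (c2 - c1) / (r2 - r1)
            if c < cross:
                inside = not inside
    return inside

def rasterize_pose_map(shape, regions):
    """regions: list of (polygon, color) pairs, polygon = [(row, col), ...].
    Each closed figure is filled with its preset color (claim 2); callers
    give symmetric figures different colors (claim 3)."""
    pose_map = np.zeros((shape[0], shape[1], 3), dtype=np.uint8)
    for poly, color in regions:
        for r in range(shape[0]):
            for c in range(shape[1]):
                if point_in_polygon(r, c, poly):
                    pose_map[r, c] = color
    return pose_map

# Toy "eye" polygons standing in for keypoint-connected closed figures.
left_eye = [(10, 10), (10, 20), (16, 20), (16, 10)]
right_eye = [(10, 40), (10, 50), (16, 50), (16, 40)]
pose = rasterize_pose_map((32, 64), [
    (left_eye, (255, 0, 0)),   # left eye: red
    (right_eye, (0, 0, 255)),  # symmetric right eye: a different color
])
```

A production pipeline would more likely use a library polygon fill (e.g. OpenCV's `fillPoly`) on the detected landmark coordinates; the per-pixel loop here is only to keep the sketch self-contained.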
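Claim 4's training loop can be paraphrased as: generate a trial face-changing image from the third face image and the second expression pose map, compare it with the fourth face image, adjust the network's internal parameters from the comparison error, and repeat until the error meets a preset requirement. The NumPy sketch below mirrors only that loop structure, with a linear map standing in for the generator and a mean-squared comparison error; the adversarial discriminator and any real network architecture are omitted, and all shapes and data are toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for claim 4's paired data: third/fourth face images of the
# same person and the second expression pose map, flattened to vectors.
third_face = rng.random((16, 8))
pose_map = rng.random((16, 8))
fourth_face = 0.5 * third_face + 0.5 * pose_map   # target the generator must reproduce

x = np.hstack([third_face, pose_map])             # generator input: identity + pose
W = rng.normal(scale=0.1, size=(16, 8))           # the generator's "internal parameters"

mse0 = float(np.mean((x @ W - fourth_face) ** 2))  # error before training
for step in range(5000):
    err = x @ W - fourth_face                      # compare trial image with fourth image
    mse = float(np.mean(err ** 2))
    if mse < 1e-4:                                 # preset error requirement met
        break
    W -= 0.05 * (x.T @ err) / len(x)               # adjust parameters from the error
```

The stopping condition ("until the comparison error meets the preset error requirement") and the error-driven parameter update are the parts the claim actually specifies; everything else here is an assumption made to keep the loop runnable.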
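Claims 5 and 6 add a paste-back step: the face-changing image replaces the face portion of the original image (a video frame, in claim 6). The claims do not say how the replacement is performed; the sketch below shows the simplest hypothetical version, a hard paste at a known bounding box, whereas a real pipeline would warp and blend the swapped face into the frame.

```python
import numpy as np

def paste_back(original_frame, face_swap, top_left):
    """Replace the face region of the original frame with the
    face-changing image (claim 5). top_left = (row, col) of the region;
    the bounding box is assumed known from the earlier keypoint step."""
    top, left = top_left
    h, w = face_swap.shape[:2]
    out = original_frame.copy()          # leave the source frame untouched
    out[top:top + h, left:left + w] = face_swap
    return out

frame = np.zeros((8, 8, 3), dtype=np.uint8)       # stand-in video frame (claim 6)
swap = np.full((4, 4, 3), 200, dtype=np.uint8)    # stand-in face-changing image
done = paste_back(frame, swap, (2, 2))            # finished face-changed frame
```

For a video to be face-changed, this step would run once per frame, with the bounding box tracked frame to frame.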
CN201910344573.9A 2019-04-26 2019-04-26 Face changing method, device and equipment and computer storage medium Pending CN111860044A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910344573.9A CN111860044A (en) 2019-04-26 2019-04-26 Face changing method, device and equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN111860044A 2020-10-30

Family

ID=72951734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910344573.9A Pending CN111860044A (en) 2019-04-26 2019-04-26 Face changing method, device and equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN111860044A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014228937A (en) * 2013-05-20 2014-12-08 コニカミノルタ株式会社 Image processing apparatus, image processing method and computer program
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN109087379A (en) * 2018-08-09 2018-12-25 北京华捷艾米科技有限公司 The moving method of human face expression and the moving apparatus of human face expression
CN109151340A (en) * 2018-08-24 2019-01-04 太平洋未来科技(深圳)有限公司 Method for processing video frequency, device and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101320A (en) * 2020-11-18 2020-12-18 北京世纪好未来教育科技有限公司 Model training method, image generation method, device, equipment and storage medium
CN113033442A (en) * 2021-03-31 2021-06-25 清华大学 StyleGAN-based high-freedom face driving method and device
CN113486944A (en) * 2021-07-01 2021-10-08 深圳市英威诺科技有限公司 Face fusion method, device, equipment and storage medium
CN113326821A (en) * 2021-08-03 2021-08-31 北京奇艺世纪科技有限公司 Face driving method and device for video frame image
CN113326821B (en) * 2021-08-03 2021-10-01 北京奇艺世纪科技有限公司 Face driving method and device for video frame image
CN114007099A (en) * 2021-11-04 2022-02-01 北京搜狗科技发展有限公司 Video processing method and device for video processing

Similar Documents

Publication Publication Date Title
CN111860044A (en) Face changing method, device and equipment and computer storage medium
CN109376582B (en) Interactive face cartoon method based on generation of confrontation network
CN110147721B (en) Three-dimensional face recognition method, model training method and device
CN110379020B (en) Laser point cloud coloring method and device based on generation countermeasure network
CN106599837A (en) Face identification method and device based on multi-image input
CN103324880B (en) Certification device and the control method of certification device
Wang et al. Learning deep conditional neural network for image segmentation
CN109635783A (en) Video monitoring method, device, terminal and medium
JP2017062778A (en) Method and device for classifying object of image, and corresponding computer program product and computer-readable medium
CN108629339A (en) Image processing method and related product
CN110263768A (en) A kind of face identification method based on depth residual error network
CN113194359B (en) Method, device, equipment and medium for automatically grabbing baby wonderful video highlights
CN112507978B (en) Person attribute identification method, device, equipment and medium
CN111832372A (en) Method and device for generating three-dimensional face model simulating user
CN111833236A (en) Method and device for generating three-dimensional face model simulating user
CN111860045A (en) Face changing method, device and equipment and computer storage medium
CN113657195A (en) Face image recognition method, face image recognition equipment, electronic device and storage medium
CN114283052A (en) Method and device for cosmetic transfer and training of cosmetic transfer network
CN105224936B (en) A kind of iris feature information extracting method and device
CN111108508B (en) Face emotion recognition method, intelligent device and computer readable storage medium
CN111401193A (en) Method and device for obtaining expression recognition model and expression recognition method and device
CN110008876A (en) A kind of face verification method based on data enhancing and Fusion Features
CN108764149A (en) A kind of training method for class student faceform
CN113012030A (en) Image splicing method, device and equipment
CN116030517A (en) Model training method, face recognition device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination