CN111488768A - Face image style conversion method and device, electronic equipment and storage medium - Google Patents
Face image style conversion method and device, electronic equipment and storage medium
- Publication number
- CN111488768A (application number CN201910080228.9A)
- Authority
- CN
- China
- Prior art keywords
- current
- face
- image
- feature information
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The embodiment of the invention discloses a style conversion method and device for a face image, an electronic device and a storage medium. The method comprises the following steps: acquiring each piece of current face feature information in each current face area image of a current user at a current angle from a face image of a current expression style; determining the target face area image corresponding to each current face area image according to each piece of current face feature information in each current face area image of the current user at the current angle; and converting the face image of the current expression style into a face image of a target expression style according to the target face area image corresponding to each current face area image. The face image of the current expression style can thus be converted into a face image of the target expression style at different angles, making the face image more vivid and interesting and improving the user experience.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a method and a device for converting styles of face images, electronic equipment and a storage medium.
Background
With the development of internet technology, digital entertainment products have become part of everyday life. Digital entertainment products are entertainment products based on digital technology, such as cartoons and online games. Such products can convert a face image of a current expression style into a face image of a target expression style, where the target expression style may include a cartoon style, a three-dimensional animation style, a woodcut style, an ancient Egyptian mural style, and the like. However, with existing style conversion methods for face images, the current face image can be converted into the target face image only at a predetermined angle; it cannot be converted into the target face image at different angles. No effective solution to this problem has yet been proposed in the prior art.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for converting a style of a face image, an electronic device, and a storage medium, which can convert a face image with a current expression style into a face image with a target expression style at different angles, so that the face image is more vivid and interesting, and the user experience effect is better.
In a first aspect, an embodiment of the present invention provides a method for converting styles of face images, where the method includes:
acquiring each current face feature information in each current face area image of a current user at a current angle from the face image of the current expression style;
determining target face area images corresponding to the current face area images according to the current face feature information in the current face area images of the current user at the current angle;
and converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
In the above embodiment, the obtaining, from the face image of the current expression style, each piece of current face feature information in each current face area image of the current user at the current angle includes:
acquiring at least one face image of the current expression style of the current user at the current angle;
and determining each piece of current face feature information in each current face area image of the current user at the current angle according to at least one current expression style face image of the current user at the current angle.
In the above embodiment, the determining, according to each piece of current face feature information in each current face region image of the current user at the current angle, a target face region image corresponding to each current face region image includes:
calculating reference face feature information matched with each piece of current face feature information in each current face region image;
and acquiring a target face area image corresponding to each current face area image according to the reference face feature information matched with each current face feature information.
In the above embodiment, the calculating reference face feature information that matches each piece of current face feature information in each current face region image includes:
calculating the matching degree of each current face feature information in each current face region image and each predetermined reference face feature information;
and if the matching degree of each piece of current face feature information and each piece of reference face feature information is greater than a preset threshold value, determining each piece of reference face feature information as the reference face feature information matched with each piece of current face feature information.
In the above embodiment, the converting the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image includes:
acquiring image attributes of target face area images corresponding to the current face area images; wherein the image attributes include at least: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute;
and converting each current face area image into a target face area image corresponding to each current face area image according to the image attribute of the target face area image corresponding to each current face area image.
In a second aspect, an embodiment of the present invention provides a style conversion apparatus for a face image, where the apparatus includes: an acquisition module, a determination module and a conversion module; wherein,
the acquisition module is used for acquiring each piece of current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style;
the determining module is used for determining target face area images corresponding to the current face area images according to the current face feature information in the current face area images of the current user at the current angle;
and the conversion module is used for converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
In the above embodiment, the obtaining module is specifically configured to obtain at least one face image of the current expression style of the current user at the current angle; and to determine each piece of current face feature information in each current face area image of the current user at the current angle according to the at least one face image of the current expression style of the current user at the current angle.
In the above embodiment, the determining module includes: a calculation submodule and an acquisition submodule; wherein,
the calculation submodule is used for calculating reference face feature information matched with each piece of current face feature information in each current face area image;
and the acquisition submodule is used for acquiring a target face area image corresponding to each current face area image according to the reference face feature information matched with each current face feature information.
In the above embodiment, the calculating sub-module is specifically configured to calculate a matching degree between each piece of current face feature information in each current face region image and each piece of predetermined reference face feature information; and if the matching degree of each piece of current face feature information and each piece of reference face feature information is greater than a preset threshold value, determining each piece of reference face feature information as the reference face feature information matched with each piece of current face feature information.
In the above embodiment, the conversion module is specifically configured to acquire an image attribute of a target face area image corresponding to each current face area image; wherein the image attributes include at least: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute; and converting each current face area image into a target face area image corresponding to each current face area image according to the image attribute of the target face area image corresponding to each current face area image.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for converting the style of the face image according to any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement a style conversion method for a face image according to any embodiment of the present invention.
The embodiment of the invention provides a style conversion method and device for a face image, an electronic device and a storage medium. Each piece of current face feature information in each current face area image of a current user at a current angle is first acquired from a face image of a current expression style; the target face area image corresponding to each current face area image is then determined according to each piece of current face feature information in each current face area image of the current user at the current angle; and the face image of the current expression style is then converted into a face image of a target expression style according to the target face area image corresponding to each current face area image. That is to say, in the technical solution of the invention, the target face area image corresponding to each current face area image can be determined according to each piece of current face feature information in each current face area image of the current user at the current angle, and the face image of the current expression style can then be converted into the face image of the target expression style according to those target face area images. In the existing face image style conversion methods, by contrast, the face image of the current expression style can only be converted into the face image of the target expression style at a certain predetermined angle; it cannot be converted at different angles. Therefore, compared with the prior art, the style conversion method and device, the electronic device and the storage medium provided by the embodiment of the invention can convert the face image of the current expression style into the face image of the target expression style at different angles, making the face image more vivid and interesting and improving the user experience; moreover, the technical solution of the embodiment of the invention is simple to implement, easy to popularize and applicable to a wide range of scenarios.
Drawings
Fig. 1 is a schematic flowchart of a style conversion method for a face image according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a face image style conversion method according to a second embodiment of the present invention;
fig. 3 is a schematic flow chart of a style conversion method for a face image according to a third embodiment of the present invention;
fig. 4 is a first structural schematic diagram of a face image style conversion apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a second structure of the face image style conversion apparatus according to the fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
Example one
Fig. 1 is a flowchart illustrating a style conversion method for a face image according to an embodiment of the present invention. The method may be executed by a style conversion apparatus for a face image or by an electronic device, either of which may be implemented in software and/or hardware and integrated in any intelligent device with a network communication function. As shown in Fig. 1, the style conversion method for the face image may include the following steps:
s101, obtaining each current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style.
In a specific embodiment of the present invention, the electronic device may obtain, from a face image of a current expression style, current face feature information of each current face area image of a current user at a current angle; wherein the current face region image comprises at least one of: a current nose region image, a current eye region image, a current eyebrow region image, a current mouth region image, a current ear region image, and a current hair region image. Specifically, the electronic device may acquire nose feature information in a current nose area image of a current user at a current angle; the eye feature information of the current user in the current eye area image at the current angle can be acquired; the eyebrow feature information in the current eyebrow area image at the current angle of the current user can be acquired; mouth characteristic information in the current mouth area image of the current user at the current angle can be acquired; the ear feature information of the current user in the current ear region image at the current angle can be acquired; and the hair characteristic information in the current hair area image of the current user at the current angle can be acquired.
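For illustration only (this sketch is not part of the patent disclosure), the following Python code shows one possible way to obtain the current face area images named above. The landmark detector `detect_landmarks`, the region names and the rectangular cropping are all assumptions; the embodiment does not prescribe a particular feature extraction technique.

```python
# Illustrative sketch only: splitting a face image of the current expression
# style into per-region images at the current angle.
import numpy as np

REGIONS = ["nose", "eyes", "eyebrows", "mouth", "ears", "hair"]

def crop_region(image: np.ndarray, points, margin: int = 10) -> np.ndarray:
    """Crop a rectangular patch around the landmark points of one face region."""
    xs, ys = zip(*points)
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, image.shape[1])
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, image.shape[0])
    return image[y0:y1, x0:x1]

def current_face_regions(image: np.ndarray, detect_landmarks) -> dict:
    """Return {region name: cropped current face area image}."""
    landmarks = detect_landmarks(image)  # assumed to return {name: [(x, y), ...]}
    return {name: crop_region(image, landmarks[name])
            for name in REGIONS if name in landmarks}
```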
And S102, determining target face area images corresponding to the current face area images according to the current face feature information in the current face area images of the current user at the current angle.
In a specific embodiment of the present invention, the electronic device may determine, according to each piece of current face feature information in each current face region image of the current user at the current angle, the target face region image corresponding to each current face region image; wherein the target face region image comprises: a target nose region image, a target eye region image, a target eyebrow region image, a target mouth region image, a target ear region image, and a target hair region image. Specifically, the electronic device may determine the target nose region image corresponding to the current nose region image according to the nose feature information in the current nose region image of the current user at the current angle; the target eye region image corresponding to the current eye region image can be determined according to the eye feature information in the current eye region image of the current user at the current angle; the target eyebrow region image corresponding to the current eyebrow region image can be determined according to the eyebrow feature information in the current eyebrow region image of the current user at the current angle; the target mouth region image corresponding to the current mouth region image can be determined according to the mouth feature information in the current mouth region image of the current user at the current angle; the target ear region image corresponding to the current ear region image can be determined according to the ear feature information in the current ear region image of the current user at the current angle; and the target hair region image corresponding to the current hair region image can be determined according to the hair feature information in the current hair region image of the current user at the current angle.
And S103, converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
In a specific embodiment of the present invention, the electronic device may convert the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image; wherein the target expression style may include: cartoon style, three-dimensional animation style, woodcut style, ancient Egypt mural style, etc. Specifically, the electronic device may obtain image attributes of a target face region image corresponding to each current face region image; wherein the image attributes at least include: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute; and then converting each current face region image into a target face region image corresponding to each current face region image according to the image attribute of the target face region image corresponding to each current face region image.
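Again for illustration only, the sketch below shows one way to apply the size, hue, brightness and contrast attributes of a target face area image to a current face area image. The use of Pillow and the layout of the attribute dictionary are assumptions; the patent does not name a library or a data format.

```python
# Illustrative sketch only: adjusting a current face region image to the image
# attributes of its corresponding target-style region.
from PIL import Image, ImageEnhance

def apply_target_attributes(region: Image.Image, attrs: dict) -> Image.Image:
    """attrs is assumed to look like
    {"size": (w, h), "hue": 1.0, "brightness": 1.0, "contrast": 1.0}."""
    out = region.resize(attrs["size"])
    # Pillow has no direct hue control; ImageEnhance.Color (colour saturation)
    # is used here as a rough stand-in for the hue attribute.
    out = ImageEnhance.Color(out).enhance(attrs["hue"])
    out = ImageEnhance.Brightness(out).enhance(attrs["brightness"])
    out = ImageEnhance.Contrast(out).enhance(attrs["contrast"])
    return out
```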
The style conversion method for a face image provided by the embodiment of the invention first obtains each piece of current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style; then determines the target face area image corresponding to each current face area image according to each piece of current face feature information in each current face area image of the current user at the current angle; and then converts the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image. That is to say, in the technical solution of the invention, the target face area image corresponding to each current face area image can be determined according to each piece of current face feature information in each current face area image of the current user at the current angle, and the face image of the current expression style can then be converted into the face image of the target expression style according to those target face area images. In the existing face image style conversion methods, by contrast, the face image of the current expression style can only be converted into the face image of the target expression style at a certain predetermined angle; it cannot be converted at different angles. Therefore, compared with the prior art, the style conversion method for a face image provided by the embodiment of the invention can convert the face image of the current expression style into the face image of the target expression style at different angles, making the face image more vivid and interesting and improving the user experience; moreover, the technical solution of the embodiment of the invention is simple to implement, easy to popularize and applicable to a wide range of scenarios.
Example two
Fig. 2 is a flowchart illustrating a style conversion method for a face image according to a second embodiment of the present invention. As shown in fig. 2, the style conversion method of the face image may include the following steps:
s201, obtaining at least one face image of the current expression style of the current user at the current angle.
In a specific embodiment of the present invention, the electronic device may acquire at least one face image of the current expression style of the current user at the current angle. Specifically, the electronic device may acquire face image 1, face image 2, …, and face image M of the current user at the current angle, where M is a natural number greater than or equal to 1.
S202, determining each piece of current face feature information in each current face area image of the current user at the current angle according to at least one current expression style face image of the current user at the current angle.
In a specific embodiment of the present invention, the electronic device may determine, according to at least one current expression style face image of the current user at the current angle, each current face feature information in each current face area image of the current user at the current angle. Specifically, the current face image 1 may include: a current nose region image 1, a current eye region image 1, a current eyebrow region image 1, a current mouth region image 1, a current ear region image 1, and a current hair region image 1; the current face image 2 may include: a current nose region image 2, a current eye region image 2, a current eyebrow region image 2, a current mouth region image 2, a current ear region image 2, and a current hair region image 2; …, respectively; the current face image M may include: a current nose region image M, a current eye region image M, a current eyebrow region image M, a current mouth region image M, a current ear region image M, and a current hair region image M. Therefore, the electronic device can determine the nose feature information of the current user in the current nose region image 1 at the current angle according to the current face image 1; the eye feature information of the current user in the current eye area image 1 at the current angle can be determined according to the current face image 1; the eyebrow feature information in the current eyebrow area image 1 of the current user at the current angle can be determined according to the current face image 1; the mouth feature information of the current user in the current mouth area image 1 at the current angle can be determined according to the current face image 1; the ear feature information in the current ear region image 1 of the current user at the current angle can be determined according to the current face image 1; the hair characteristic information of the current user in the current hair area image 1 at the current angle can be determined according to the current face image 1; and so on.
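As an illustrative sketch only, the code below derives one feature vector per face region from the M face images. Averaging over the M images and the helper `extract_region_features` are assumptions; the embodiment only states that the feature information is determined from at least one face image.

```python
# Illustrative sketch only: per-region feature information from M face images
# of the current expression style at the current angle.
import numpy as np

def current_face_features(images, extract_region_features) -> dict:
    """Average each region's feature vector over face image 1 ... face image M."""
    per_region = {}
    for image in images:  # M >= 1 images at the current angle
        for region, vec in extract_region_features(image).items():
            per_region.setdefault(region, []).append(np.asarray(vec, dtype=float))
    return {region: np.mean(vecs, axis=0) for region, vecs in per_region.items()}
```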
S203, determining target face area images corresponding to the current face area images according to the current face feature information in the current face area images of the current user at the current angle.
In a specific embodiment of the present invention, the electronic device may determine, according to each piece of current face feature information in each current face region image of the current user at the current angle, the target face region image corresponding to each current face region image; wherein the target face region image comprises: a target nose region image, a target eye region image, a target eyebrow region image, a target mouth region image, a target ear region image, and a target hair region image. Specifically, the electronic device may determine the target nose region image corresponding to the current nose region image according to the nose feature information in the current nose region image of the current user at the current angle; the target eye region image corresponding to the current eye region image can be determined according to the eye feature information in the current eye region image of the current user at the current angle; the target eyebrow region image corresponding to the current eyebrow region image can be determined according to the eyebrow feature information in the current eyebrow region image of the current user at the current angle; the target mouth region image corresponding to the current mouth region image can be determined according to the mouth feature information in the current mouth region image of the current user at the current angle; the target ear region image corresponding to the current ear region image can be determined according to the ear feature information in the current ear region image of the current user at the current angle; and the target hair region image corresponding to the current hair region image can be determined according to the hair feature information in the current hair region image of the current user at the current angle.
And S204, converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
In a specific embodiment of the present invention, the electronic device may convert the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image; wherein the target expression style may include: cartoon style, three-dimensional animation style, woodcut style, ancient Egypt mural style, etc. Specifically, the electronic device may obtain image attributes of a target face region image corresponding to each current face region image; wherein the image attributes at least include: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute; and then converting each current face region image into a target face region image corresponding to each current face region image according to the image attribute of the target face region image corresponding to each current face region image.
The style conversion method for a face image provided by the embodiment of the invention first obtains each piece of current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style; then determines the target face area image corresponding to each current face area image according to each piece of current face feature information in each current face area image of the current user at the current angle; and then converts the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image. That is to say, in the technical solution of the invention, the target face area image corresponding to each current face area image can be determined according to each piece of current face feature information in each current face area image of the current user at the current angle, and the face image of the current expression style can then be converted into the face image of the target expression style according to those target face area images. In the existing face image style conversion methods, by contrast, the face image of the current expression style can only be converted into the face image of the target expression style at a certain predetermined angle; it cannot be converted at different angles. Therefore, compared with the prior art, the style conversion method for a face image provided by the embodiment of the invention can convert the face image of the current expression style into the face image of the target expression style at different angles, making the face image more vivid and interesting and improving the user experience; moreover, the technical solution of the embodiment of the invention is simple to implement, easy to popularize and applicable to a wide range of scenarios.
EXAMPLE III
Fig. 3 is a flowchart illustrating a style conversion method for a face image according to a third embodiment of the present invention. As shown in fig. 3, the style conversion method of the face image may include the following steps:
s301, obtaining at least one face image of the current expression style of the current user at the current angle.
In a specific embodiment of the present invention, the electronic device may acquire at least one face image of the current expression style of the current user at the current angle. Specifically, the electronic device may acquire face image 1, face image 2, …, and face image M of the current user at the current angle, where M is a natural number greater than or equal to 1.
S302, determining each piece of current face feature information in each current face area image of the current user at the current angle according to at least one current expression style face image of the current user at the current angle.
In a specific embodiment of the present invention, the electronic device may determine, according to at least one current expression style face image of the current user at the current angle, each current face feature information in each current face area image of the current user at the current angle. Specifically, the current face image 1 may include: a current nose region image 1, a current eye region image 1, a current eyebrow region image 1, a current mouth region image 1, a current ear region image 1, and a current hair region image 1; the current face image 2 may include: a current nose region image 2, a current eye region image 2, a current eyebrow region image 2, a current mouth region image 2, a current ear region image 2, and a current hair region image 2; …, respectively; the current face image M may include: a current nose region image M, a current eye region image M, a current eyebrow region image M, a current mouth region image M, a current ear region image M, and a current hair region image M. Therefore, the electronic device can determine the nose feature information of the current user in the current nose region image 1 at the current angle according to the current face image 1; the eye feature information of the current user in the current eye area image 1 at the current angle can be determined according to the current face image 1; the eyebrow feature information in the current eyebrow area image 1 of the current user at the current angle can be determined according to the current face image 1; the mouth feature information of the current user in the current mouth area image 1 at the current angle can be determined according to the current face image 1; the ear feature information in the current ear region image 1 of the current user at the current angle can be determined according to the current face image 1; the hair characteristic information of the current user in the current hair area image 1 at the current angle can be determined according to the current face image 1; and so on.
And S303, calculating reference face feature information matched with each piece of current face feature information in each current face region image.
In a specific embodiment of the present invention, the electronic device may calculate reference face feature information that matches each current face feature information in each current face region image. Specifically, the electronic device may input each current face feature information in each current face region image into a predetermined calculation model, and then the calculation model may calculate the matched reference face feature information according to each current face feature information in each current face region image.
Preferably, in the embodiment of the present invention, the electronic device may calculate a matching degree between each current face feature information in each current face region image and each predetermined reference face feature information; if the matching degree of each piece of current face feature information and each piece of reference face feature information is greater than a preset threshold value, the electronic device may determine each piece of reference face feature information as the reference face feature information matched with each piece of current face feature information.
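For illustration only, the sketch below realises the matching-degree test with cosine similarity and a threshold of 0.8; both the metric and the threshold value are assumptions, since the embodiment leaves them unspecified.

```python
# Illustrative sketch only: one possible matching-degree computation for S303.
import numpy as np

def matching_degree(current: np.ndarray, reference: np.ndarray) -> float:
    """Cosine similarity between two feature vectors, in [-1, 1]."""
    denom = np.linalg.norm(current) * np.linalg.norm(reference) + 1e-12
    return float(np.dot(current, reference) / denom)

def matched_reference(current: np.ndarray, references: dict, threshold: float = 0.8):
    """Return the name of the best-matching reference feature information,
    or None if no matching degree exceeds the preset threshold."""
    scores = {name: matching_degree(current, ref) for name, ref in references.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score > threshold else None
```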
S304, acquiring target face area images corresponding to the current face area images according to the reference face feature information matched with the current face feature information.
In a specific embodiment of the present invention, the electronic device may obtain a target face region image corresponding to each current face region image according to the reference face feature information matched with each current face feature information. In an embodiment of the present invention, the reference facial feature information may include: reference nose feature information, reference eye feature information, reference eyebrow feature information, reference mouth feature information, reference ear feature information, and reference hair feature information. Therefore, the electronic device can acquire a target nose region image corresponding to the current nose region image according to the reference nose feature information; a target eye area image corresponding to the current eye area image can be acquired according to the reference eye characteristic information; the target eyebrow area image corresponding to the current eyebrow area image can be acquired according to the reference eyebrow characteristic information; a target mouth area image corresponding to the current mouth area image can be obtained according to the reference mouth characteristic information; the target ear region image corresponding to the current ear region image can be obtained according to the reference ear feature information; and acquiring a target hair area image corresponding to the current hair area image according to the reference hair characteristic information.
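Continuing the illustrative sketch, the target face area image for each region can then be looked up from the matched reference face feature information. The `target_library` mapping is a hypothetical structure standing in for however the target-style patches are stored; it is not described in the patent.

```python
# Illustrative sketch only: per-region lookup of target face area images.
def target_regions(matched: dict, target_library: dict) -> dict:
    """matched maps each region to the name of its matched reference feature
    (or None); target_library maps (region, reference name) to a pre-built
    target-style face area image."""
    return {region: target_library[(region, ref)]
            for region, ref in matched.items() if ref is not None}
```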
S305, converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
In a specific embodiment of the present invention, the electronic device may convert the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image; wherein the target expression style may include: cartoon style, three-dimensional animation style, woodcut style, ancient Egypt mural style, etc. Specifically, the electronic device may obtain image attributes of a target face region image corresponding to each current face region image; wherein the image attributes at least include: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute; and then converting each current face region image into a target face region image corresponding to each current face region image according to the image attribute of the target face region image corresponding to each current face region image.
The style conversion method for a face image provided by the embodiment of the invention first obtains each piece of current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style; then determines the target face area image corresponding to each current face area image according to each piece of current face feature information in each current face area image of the current user at the current angle; and then converts the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image. That is to say, in the technical solution of the invention, the target face area image corresponding to each current face area image can be determined according to each piece of current face feature information in each current face area image of the current user at the current angle, and the face image of the current expression style can then be converted into the face image of the target expression style according to those target face area images. In the existing face image style conversion methods, by contrast, the face image of the current expression style can only be converted into the face image of the target expression style at a certain predetermined angle; it cannot be converted at different angles. Therefore, compared with the prior art, the style conversion method for a face image provided by the embodiment of the invention can convert the face image of the current expression style into the face image of the target expression style at different angles, making the face image more vivid and interesting and improving the user experience; moreover, the technical solution of the embodiment of the invention is simple to implement, easy to popularize and applicable to a wide range of scenarios.
Example four
Fig. 4 is a schematic view of a first structure of a face image style conversion apparatus according to the fourth embodiment of the present invention. As shown in Fig. 4, the style conversion apparatus for a face image according to this embodiment of the present invention may include: an acquisition module 401, a determination module 402 and a conversion module 403; wherein,
the obtaining module 401 is configured to obtain, from a face image of a current expression style, current face feature information of each current face area image of a current user at a current angle;
the determining module 402 is configured to determine, according to each piece of current face feature information in each current face region image of the current user at the current angle, a target face region image corresponding to each current face region image;
the conversion module 403 is configured to convert the face image in the current expression style into a face image in a target expression style according to the target face area image corresponding to each current face area image.
Further, the obtaining module 401 is specifically configured to obtain at least one face image of a current expression style of the current user at a current angle; and determining each piece of current face feature information in each current face area image of the current user at the current angle according to at least one current expression style face image of the current user at the current angle.
Fig. 5 is a schematic diagram of a second structure of the face image style conversion apparatus according to the fourth embodiment of the present invention. As shown in fig. 5, the determining module 402 includes: a calculation sub-module 4021 and an acquisition sub-module 4022; wherein,
the calculating submodule 4021 is configured to calculate reference face feature information matched with each piece of current face feature information in each current face region image;
the obtaining sub-module 4022 is configured to obtain a target face region image corresponding to each current face region image according to the reference face feature information matched with each current face feature information.
Further, the calculating sub-module 4021 is specifically configured to calculate a matching degree between each piece of current face feature information in each current face region image and each piece of predetermined reference face feature information; and if the matching degree of each piece of current face feature information and each piece of reference face feature information is greater than a preset threshold value, determining each piece of reference face feature information as the reference face feature information matched with each piece of current face feature information.
Further, the conversion module 403 is specifically configured to obtain image attributes of target face region images corresponding to each current face region image; wherein the image attributes include at least: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute; and converting each current face area image into a target face area image corresponding to each current face area image according to the image attribute of the target face area image corresponding to each current face area image.
The style conversion device for a face image can execute the method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method. For technical details not described in this embodiment, reference may be made to the style conversion method for a face image provided in any embodiment of the present invention.
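For illustration only, the apparatus of Figs. 4 and 5 could be mirrored in code as a single class whose members play the roles of the acquisition module 401, the determination module 402 and the conversion module 403; the callables passed to the constructor are assumptions, not part of the patent.

```python
# Illustrative sketch only: a minimal counterpart of the apparatus structure.
class FaceStyleConverter:
    """Mirrors the acquisition/determination/conversion modules of Figs. 4-5."""

    def __init__(self, acquire, determine, convert):
        self.acquire = acquire      # acquisition module 401: image -> {region: features}
        self.determine = determine  # determination module 402: features -> {region: target image}
        self.convert = convert      # conversion module 403: (image, targets) -> styled image

    def __call__(self, current_image):
        features = self.acquire(current_image)
        targets = self.determine(features)
        return self.convert(current_image, targets)
```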
EXAMPLE five
Fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 6 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present invention. The electronic device 12 shown in fig. 6 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 6, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, and commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The processing unit 16 executes various functional applications and data processing, such as implementing a style conversion method for a face image provided by an embodiment of the present invention, by running a program stored in the system memory 28.
EXAMPLE six
The sixth embodiment of the invention provides a computer storage medium.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (12)
1. A style conversion method of a face image is characterized by comprising the following steps:
acquiring each current face feature information in each current face area image of a current user at a current angle from the face image of the current expression style;
determining target face area images corresponding to the current face area images according to the current face feature information in the current face area images of the current user at the current angle;
and converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
2. The method according to claim 1, wherein the obtaining of each piece of current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style comprises:
acquiring at least one face image of the current expression style of the current user at the current angle;
and determining each piece of current face feature information in each current face area image of the current user at the current angle according to at least one current expression style face image of the current user at the current angle.
3. The method according to claim 1, wherein the determining, according to each current face feature information in each current face region image of the current user at the current angle, a target face region image corresponding to each current face region image comprises:
calculating reference face feature information matched with each piece of current face feature information in each current face region image;
and acquiring a target face area image corresponding to each current face area image according to the reference face feature information matched with each current face feature information.
4. The method according to claim 3, wherein the calculating of the reference face feature information matching with each current face feature information in each current face region image comprises:
calculating the matching degree of each current face feature information in each current face region image and each predetermined reference face feature information;
and if the matching degree of each piece of current face feature information and each piece of reference face feature information is greater than a preset threshold value, determining each piece of reference face feature information as the reference face feature information matched with each piece of current face feature information.
5. The method of claim 1, wherein the converting the face image of the current expression style into the face image of the target expression style according to the target face area image corresponding to each current face area image comprises:
acquiring image attributes of target face area images corresponding to the current face area images; wherein the image attributes include at least: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute;
and converting each current face area image into a target face area image corresponding to each current face area image according to the image attribute of the target face area image corresponding to each current face area image.
6. An apparatus for converting a style of a face image, the apparatus comprising: an acquisition module, a determination module and a conversion module; wherein,
the acquisition module is used for acquiring each piece of current face feature information in each current face area image of the current user at the current angle from the face image of the current expression style;
the determining module is used for determining target face area images corresponding to the current face area images according to the current face feature information in the current face area images of the current user at the current angle;
and the conversion module is used for converting the face image with the current expression style into the face image with the target expression style according to the target face area image corresponding to each current face area image.
7. The apparatus of claim 6, wherein:
the acquisition module is specifically used for acquiring at least one face image of the current expression style of the current user at the current angle; and determining each piece of current face feature information in each current face area image of the current user at the current angle according to at least one current expression style face image of the current user at the current angle.
8. The apparatus of claim 6, wherein the determining module comprises: a calculation submodule and an acquisition submodule; wherein,
the calculation submodule is used for calculating reference face feature information matched with each piece of current face feature information in each current face area image;
and the acquisition submodule is used for acquiring a target face area image corresponding to each current face area image according to the reference face feature information matched with each current face feature information.
9. The apparatus of claim 8, wherein:
the calculating submodule is specifically used for calculating the matching degree of each piece of current face feature information in each current face area image and each piece of predetermined reference face feature information; and if the matching degree of each piece of current face feature information and each piece of reference face feature information is greater than a preset threshold value, determining each piece of reference face feature information as the reference face feature information matched with each piece of current face feature information.
10. The apparatus of claim 6, wherein:
the conversion module is specifically used for acquiring the image attributes of the target face area images corresponding to the current face area images; wherein the image attributes include at least: a size attribute, a hue attribute, a brightness attribute, and a contrast attribute; and converting each current face area image into a target face area image corresponding to each current face area image according to the image attribute of the target face area image corresponding to each current face area image.
11. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of style conversion of a face image according to any one of claims 1 to 5.
12. A storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of style conversion of a face image according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910080228.9A CN111488768B (en) | 2019-01-28 | 2019-01-28 | Style conversion method and device for face image, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111488768A (en) | 2020-08-04 |
CN111488768B (en) | 2023-09-05 |
Family
ID=71795894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910080228.9A (CN111488768B, active) | Style conversion method and device for face image, electronic equipment and storage medium | 2019-01-28 | 2019-01-28 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111488768B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104156993A (en) * | 2014-07-18 | 2014-11-19 | 小米科技有限责任公司 | Method and device for switching face image in picture |
WO2018113523A1 (en) * | 2016-12-24 | 2018-06-28 | 深圳云天励飞技术有限公司 | Image processing method and device, and storage medium |
CN108062404A (en) * | 2017-12-28 | 2018-05-22 | 奇酷互联网络科技(深圳)有限公司 | Processing method, device, readable storage medium storing program for executing and the terminal of facial image |
CN108696637A (en) * | 2018-05-03 | 2018-10-23 | 上海闻泰电子科技有限公司 | Method for information display, device, server and storage medium |
CN108846793A (en) * | 2018-05-25 | 2018-11-20 | 深圳市商汤科技有限公司 | Image processing method and terminal device based on image style transformation model |
CN109272579A (en) * | 2018-08-16 | 2019-01-25 | Oppo广东移动通信有限公司 | Makeups method, apparatus, electronic equipment and storage medium based on threedimensional model |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114880057A (en) * | 2022-04-22 | 2022-08-09 | 北京三快在线科技有限公司 | Image display method, image display device, terminal, server, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111488768B (en) | 2023-09-05 |
Similar Documents
Publication | Title |
---|---|
CN108010112B (en) | Animation processing method, device and storage medium |
CN108874136B (en) | Dynamic image generation method, device, terminal and storage medium |
CN113362263B (en) | Method, apparatus, medium and program product for transforming an image of a virtual idol |
CN114187633A (en) | Image processing method and device, and training method and device of image generation model |
CN104737198B (en) | The result of visibility test is recorded in input geometric object granularity |
WO2020029467A1 (en) | Video frame processing method and apparatus |
CN109698914A (en) | A kind of lightning special efficacy rendering method, device, equipment and storage medium |
CN113379885B (en) | Virtual hair processing method and device, readable storage medium and electronic equipment |
CN112766215A (en) | Face fusion method and device, electronic equipment and storage medium |
CN114049674A (en) | Three-dimensional face reconstruction method, device and storage medium |
CN109657127B (en) | Answer obtaining method, device, server and storage medium |
CN111815748B (en) | Animation processing method and device, storage medium and electronic equipment |
CN111488768A (en) | Face image style conversion method and device, electronic equipment and storage medium |
WO2021208170A1 (en) | Method and apparatus for determining target algorithm in vr scene, and computing device |
CN114612602A (en) | Method and device for determining transparency, electronic equipment and storage medium |
CN112714337A (en) | Video processing method and device, electronic equipment and storage medium |
CN109857244B (en) | Gesture recognition method and device, terminal equipment, storage medium and VR glasses |
CN109461203B (en) | Gesture three-dimensional image generation method and device, computer equipment and storage medium |
CN112528707A (en) | Image processing method, device, equipment and storage medium |
CN110264431A (en) | Video beautification method, device and electronic equipment |
CN110288552A (en) | Video beautification method, device and electronic equipment |
CN112465692A (en) | Image processing method, device, equipment and storage medium |
CN113223128B (en) | Method and apparatus for generating image |
CN112053450B (en) | Text display method and device, electronic equipment and storage medium |
CN109190048B (en) | Wearing object recommendation method and device, electronic equipment and storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |