CN109993807B - Head portrait generation method, device and storage medium - Google Patents


Info

Publication number
CN109993807B
Authority
CN
China
Prior art keywords
face
user
face feature
features
feature
Prior art date
Legal status
Active
Application number
CN201910130415.3A
Other languages
Chinese (zh)
Other versions
CN109993807A (en)
Inventor
林成龙
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910130415.3A priority Critical patent/CN109993807B/en
Publication of CN109993807A publication Critical patent/CN109993807A/en
Application granted granted Critical
Publication of CN109993807B publication Critical patent/CN109993807B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a head portrait generation method, device and storage medium. The method comprises: determining, from a plurality of face features of a first user, a first face feature that represents a facial characteristic of the first user; and amplifying the first face feature and generating a cartoon head portrait for the first user according to the amplified first face feature. Because the generated cartoon head portrait highlights the facial characteristics of the first user, the first user can be identified more readily.

Description

Head portrait generation method, device and storage medium
Technical Field
The present invention relates to the field of face recognition, and in particular, to a method and apparatus for generating an avatar, and a storage medium.
Background
Currently, a user may upload an avatar in a network application so that the avatar identifies the user.
In the prior art, the user's head portrait may be a real head portrait consistent with the user's actual appearance, or a virtual head portrait unrelated to it. For example, the user may upload a photograph of himself as a real head portrait, or upload a cartoon image as a virtual head portrait.
However, when a cartoon image is used as the virtual head portrait in the prior art, the user's facial features are not displayed, so the user cannot be identified well.
Disclosure of Invention
The invention provides a head portrait generation method, device and storage medium, which solve the prior-art problem that a cartoon image used as a virtual head portrait does not display the user's facial features, so that the user cannot be identified well.
In a first aspect, the present invention provides a method for generating an avatar, including:
determining a first face feature representing the facial features of a first user from a plurality of face features of the first user;
amplifying the first face feature, and generating a cartoon head image for the first user according to the amplified first face feature.
In one possible implementation, the determining, from a plurality of face features of a first user, a first face feature representing a face feature of the first user includes:
determining, according to a face feature set, the first face feature representing the facial characteristic of the first user from the plurality of face features of the first user, wherein the face feature set comprises face features of a plurality of second users.
In one possible implementation, the determining, according to the face feature set, a first face feature representing a facial feature of the first user from a plurality of face features of the first user includes:
determining whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, wherein the first number is the number of second users in the face feature set whose corresponding face feature has a similarity greater than or equal to a first similarity threshold with a target face feature of the first user, the target face feature being taken over each of the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and
if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
In one possible implementation, the method further comprises:
determining a second face feature from the plurality of face features of the first user according to the face feature set, wherein the second face feature and the first face feature are face features of different parts; and
shrinking the second face feature;
wherein the generating a cartoon head image for the first user according to the amplified first face feature comprises:
generating a cartoon head image for the first user according to the amplified first face feature and the shrunk second face feature.
In one possible implementation, the determining a second face feature from the plurality of face features of the first user according to the face feature set includes:
determining whether a ratio of a third number to the second number is greater than or equal to a second ratio threshold, wherein the third number is the number of second users in the face feature set whose corresponding face feature has a similarity greater than or equal to a second similarity threshold with a target face feature of the first user, the target face feature being taken over each of the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and
if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
In one possible implementation, the generating a cartoon head image for the first user according to the enlarged first face feature includes:
generating a cartoon head image for the first user according to a third face feature and the amplified first face feature, wherein the third face feature and the first face feature are face features of different parts.
In one possible implementation, the third face feature is a preset face feature.
In a second aspect, the present invention provides an avatar generating apparatus comprising:
a determining module, configured to determine, from a plurality of face features of a first user, a first face feature representing a facial characteristic of the first user; and
a generating module, configured to amplify the first face feature determined by the determining module, and to generate a cartoon head image for the first user according to the amplified first face feature.
In one possible implementation, the determining module is specifically configured to:
determine, according to a face feature set, the first face feature representing the facial characteristic of the first user from the plurality of face features of the first user, wherein the face feature set comprises face features of a plurality of second users.
In one possible implementation, the determining module is configured to determine, according to a face feature set, a first face feature representing a facial feature of the first user from a plurality of face features of the first user, and specifically includes:
determining whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, wherein the first number is the number of second users in the face feature set whose corresponding face feature has a similarity greater than or equal to a first similarity threshold with a target face feature of the first user, the target face feature being taken over each of the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and
if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
In one possible implementation, the determining module is further configured to:
determining a second face feature from a plurality of face features of the first user according to the face feature set, wherein the second face feature and the first face feature are face features of different parts;
the generating module is further configured to reduce the second face feature;
the generating module is configured to generate a cartoon head image for the first user according to the amplified first face feature, specifically including: generating a cartoon head image for the first user according to the amplified first face feature and the shrunk second face feature.
In one possible implementation, the determining module is configured to determine, according to the face feature set, a second face feature from a plurality of face features of the first user, and specifically includes:
determining whether a ratio of a third number to the second number is greater than or equal to a second ratio threshold, wherein the third number is the number of second users in the face feature set whose corresponding face feature has a similarity greater than or equal to a second similarity threshold with a target face feature of the first user, the target face feature being taken over each of the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and
if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
In one possible implementation, the generating module is configured to generate a cartoon head image for the first user according to the amplified first face feature, and specifically includes:
generating a cartoon head image for the first user according to a third face feature and the amplified first face feature, wherein the third face feature and the first face feature are face features of different parts.
In one possible implementation, the third face feature is a preset face feature.
In a third aspect, the present invention provides an avatar generating apparatus, comprising:
a processor and a memory configured to store computer instructions, wherein the processor executes the computer instructions to perform the method of any one of the first aspects above.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions which, when executed by a processor of an avatar generation device, cause the avatar generation device to perform the method of any one of the first aspects above.
According to the head portrait generation method, device and storage medium, a first face feature representing a facial characteristic of a first user is determined from a plurality of face features of the first user, the first face feature is amplified, and a cartoon head portrait is generated for the first user according to the amplified first face feature. The generated cartoon head portrait therefore highlights the facial characteristics of the first user, so that the first user can be identified more readily.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings needed in the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic view of an application scenario of an avatar generation method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a first embodiment of an avatar generation method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a second embodiment of an avatar generation method according to an embodiment of the present invention;
fig. 4 is a schematic flow chart of a third embodiment of an avatar generation method according to the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an avatar generating apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an avatar generating apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic diagram of an application scenario of an avatar generation method according to an embodiment of the present invention. As shown in fig. 1, the application scenario may include a terminal 11 and a server 12. A user may select his avatar in a network application by using the terminal 11, and the terminal 11 may upload the avatar selected by the user to the server 12. The terminal 11 may be, for example, a mobile phone, a tablet computer, or a notebook computer. Here, a network application may refer to web-based software, games, products, and the like that are visible and operable to the user.
It should be noted that the avatar generation method provided in the embodiment of the present invention may be executed by the terminal 11; after the terminal 11 generates the avatar, it may send the generated avatar to the server 12 for storage. Alternatively, the method may be executed by the server 12; after the server 12 generates the avatar, it may optionally send the generated avatar to the terminal 11, which presents it to the user. The method may also be executed by a device other than the server 12 and the terminal 11; after that device generates the avatar, it may send the avatar to the server 12 for storage and, optionally, to the terminal 11, which presents it to the user.
It can thus be seen that the avatar generation method provided by the embodiment of the present invention may be executed by the terminal 11, by the server 12, or by another device other than the terminal 11 and the server 12.
Fig. 2 is a schematic flow chart of a first embodiment of an avatar generation method according to an embodiment of the present invention. As shown in fig. 2, the method of the present embodiment may include:
step 201, determining a first face feature representing the facial features of a first user from a plurality of face features of the first user.
In this step, the plurality of face features may include features of different face parts, and may optionally include at least two of the following: eye features, eyebrow features, nose features, mouth features, ear features, face-shape features, and the like. The eye features may specifically include an eye shape feature, an eye-break length feature (the horizontal length of the eye opening), and an eye-break width feature (the vertical width of the eye opening); the eyebrow features may specifically include an eyebrow shape feature and an eyebrow length feature; the nose features may specifically include a nose shape feature and a nose size feature; the mouth features may specifically include a mouth shape feature, an upper lip thickness feature, and a lower lip thickness feature; and the ear features may specifically include an ear shape feature and an ear size feature.
Optionally, the plurality of face features of the first user may be compared with corresponding preset face features to determine the first face feature representing a facial characteristic of the first user. Further optionally, when the similarity between one of the plurality of face features of the first user and the preset face feature corresponding to that face feature is less than or equal to a certain value, that face feature may be determined to be the first face feature: a low similarity to the preset feature indicates that the face feature of the first user is distinctive and therefore representative. Here, a face feature and its corresponding preset face feature are face features of the same type, for example, both eye shape features or both eye-break length features.
Specifically, a certain number (e.g., 150) of face key points of the face of the first user may be identified through a face recognition technology, and the plurality of face features of the first user may be obtained according to the certain number of key points. Optionally, the face image of the first user may be collected by a camera, and the collected face image of the first user may be identified by a face recognition technology, so as to identify a certain number of key points; or, a photo including the face image of the first user uploaded by the first user may be obtained, and the face image of the first user in the photo may be identified by a face recognition technology, so as to identify a certain number of key points.
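The key-point step above can be sketched as mapping detected landmarks to scalar face features. This is only an illustrative sketch: the key-point names, coordinates, and feature definitions below are assumptions, not the patent's actual landmark scheme.

```python
# Hypothetical sketch: deriving scalar face features from named face key
# points. Key-point names and the three features chosen are illustrative
# assumptions; a real system would use the detector's own landmark indices.
import math

def distance(p, q):
    """Euclidean distance between two (x, y) key points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def extract_face_features(keypoints):
    """Map a dict of named key points to scalar face features."""
    return {
        "eye_length": distance(keypoints["left_eye_inner"],
                               keypoints["left_eye_outer"]),
        "mouth_width": distance(keypoints["mouth_left"],
                                keypoints["mouth_right"]),
        "face_width": distance(keypoints["jaw_left"],
                               keypoints["jaw_right"]),
    }

# Illustrative key points in image coordinates:
kp = {
    "left_eye_inner": (120, 100), "left_eye_outer": (150, 100),
    "mouth_left": (110, 200), "mouth_right": (190, 200),
    "jaw_left": (80, 150), "jaw_right": (220, 150),
}
features = extract_face_features(kp)  # e.g. {"eye_length": 30.0, ...}
```

In practice the key points would come from a face recognition library rather than being hard-coded, and many more than three features would be derived.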
It should be noted that the number of the first face features may be one or more.
Step 202, amplifying the first face feature, and generating a cartoon head image for the first user according to the amplified first face feature.
In this step, amplifying the first face feature may be understood as highlighting (exaggerating) it. For example, assume the eye-break length feature of the first user is a first eye-break length, the eye shape feature is a first shape, the eye-break length feature is a first face feature of the first user, and the first eye-break length is smaller than a first preset eye-break length (which may indicate that short eyes are a facial characteristic of the first user). Amplifying the first face feature may then specifically be reducing the eye-break length from the first eye-break length to a second eye-break length, where the second eye-break length is smaller than the first eye-break length. Further, generating the cartoon head image according to the amplified first face feature may specifically include generating the cartoon head image with the eye-break length being the second eye-break length and the eye shape being the first shape.
For another example, assume the eye-break length feature of the first user is a third eye-break length, the eye shape feature is a first shape, the eye-break length feature is a first face feature of the first user, and the third eye-break length is greater than a second preset eye-break length (which may indicate that long eyes are a facial characteristic of the first user). Amplifying the first face feature may then specifically be increasing the eye-break length from the third eye-break length to a fourth eye-break length, where the fourth eye-break length is greater than the third eye-break length. Further, generating the cartoon head image according to the amplified first face feature may specifically include generating the cartoon head image with the eye-break length being the fourth eye-break length and the eye shape being the first shape. It is understood that the second preset eye-break length is greater than the first preset eye-break length.
For another example, assume the eye-break width feature of the first user is a first eye-break width, the eye shape feature is a second shape, the eye-break width feature is a first face feature of the first user, and the first eye-break width is greater than a preset eye-break width (which may indicate that wide eyes are a facial characteristic of the first user). Amplifying the first face feature may then specifically be keeping the eye shape unchanged and increasing the eye-break width from the first eye-break width to a second eye-break width, where the second eye-break width is greater than the first eye-break width. Further, generating the cartoon head image according to the amplified first face feature may specifically include generating the cartoon head image with the eye shape being the second shape and the eye-break width being the second eye-break width.
For another example, assume the face-shape feature of the first user is a pointed (tapered) face with a taper angle of a first angle, and the face-shape feature is a first face feature of the first user (which may indicate that a pointed face is a facial characteristic of the first user). Amplifying the first face feature may then specifically be reducing the taper angle from the first angle to a second angle, where the second angle is smaller than the first angle, making the face more sharply pointed. Further, generating the cartoon head image according to the amplified first face feature may specifically include generating the cartoon head image with the face shape being pointed and the taper angle being the second angle.
Here, by generating a cartoon head image for the first user according to the enlarged first face feature, the generated cartoon head image may highlight the first face feature, that is, the generated cartoon head image may highlight the facial feature of the first user.
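The amplification in the examples above can be read as pushing a feature value further away from a neutral value. The following minimal sketch illustrates that reading; the gain factor and the neutral value are assumptions for illustration, not values from the patent.

```python
# Minimal sketch of "amplifying" a distinctive (first) face feature, read
# as exaggerating its deviation from a neutral preset value. `gain` and
# `neutral` are illustrative assumptions.
def amplify(value, neutral, gain=1.5):
    """Exaggerate the deviation of `value` from `neutral` by `gain`."""
    return neutral + gain * (value - neutral)

# Short eyes become shorter and long eyes longer, matching the examples:
short_eye = amplify(24.0, neutral=30.0)   # 21.0
long_eye = amplify(36.0, neutral=30.0)    # 39.0
```

A cartoon renderer would then draw the eye with the exaggerated length while keeping the eye shape parameter unchanged.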
According to the head portrait generation method, the first face features representing the face features of the first user are determined from the face features of the first user, the first face features are amplified, and the cartoon head portrait is generated for the first user according to the amplified first face features, so that the generated cartoon head portrait can highlight the face features of the first user, and the first user can be identified better.
Fig. 3 is a schematic flow chart of a second embodiment of an avatar generation method according to an embodiment of the present invention. The present embodiment mainly describes an alternative implementation manner of determining the first face feature from a plurality of face features of the first user based on the embodiment shown in fig. 2. As shown in fig. 3, the method of the present embodiment may include:
step 301, determining a first face feature representing the facial features of a first user from a plurality of face features of the first user according to a face feature set.
In this step, the face feature set includes face features of a plurality of second users, where the second users are users other than the first user. Optionally, a plurality of feature thresholds may be determined according to the face feature set, the plurality of feature thresholds corresponding to the plurality of face features of the first user, and the first face feature may be determined by comparing each of the plurality of face features of the first user with its corresponding feature threshold. For example, assume that a feature threshold 1 and a feature threshold 2 may be determined according to the face feature set, where feature threshold 1 is greater than feature threshold 2 and both represent the eye-break length. When face feature 1 of the first user, representing the eye-break length, is greater than feature threshold 1, this may indicate that long eyes are a facial characteristic of the first user, so face feature 1 may be determined to be the first face feature; when face feature 1 is smaller than feature threshold 2, this may indicate that short eyes are a facial characteristic of the first user, so face feature 1 may likewise be determined to be the first face feature.
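The two-threshold reading above can be sketched as a simple band test. The thresholds would in practice be derived from the face feature set (for example, as percentiles of the second users' values); here they are supplied directly as illustrative assumptions.

```python
# Illustrative sketch of the two-threshold test: a feature value outside
# the band [threshold_2, threshold_1] is distinctive (a "first" face
# feature); a value inside the band is common. Threshold values below are
# assumptions for illustration.
def classify_by_thresholds(value, threshold_1, threshold_2):
    """Return 'first' outside [threshold_2, threshold_1], else 'second'."""
    if value > threshold_1 or value < threshold_2:
        return "first"   # unusually large or small -> distinctive
    return "second"      # within the common band -> not distinctive

# e.g. eye-break length with threshold_1 = 35.0 and threshold_2 = 25.0:
kind_long = classify_by_thresholds(40.0, 35.0, 25.0)    # "first"
kind_usual = classify_by_thresholds(30.0, 35.0, 25.0)   # "second"
```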
Alternatively, the first face feature may be determined by comparing a plurality of face features of the first user with face features of a plurality of second users in the set of face features. Further optionally, step 301 may specifically include:
determining whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, wherein the first number is the number of second users in the face feature set whose corresponding face feature has a similarity greater than or equal to a first similarity threshold with a target face feature of the first user, the target face feature being taken over each of the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and
if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
The first ratio threshold may be, for example, 10%. The first similarity threshold may be, for example, 20%.
Here, the plurality of second users may be all of the users in the face feature set.
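The ratio test above can be sketched as follows. The patent does not fix a similarity metric, so the relative-closeness measure below is an assumption; the default thresholds mirror the example values given in the text (10% ratio, 20% similarity).

```python
# Sketch of the first-feature ratio test, under an assumed similarity
# metric (relative closeness of scalar feature values). A target feature
# is a "first face feature" when at most `ratio_threshold` of the second
# users have a similar corresponding feature.
def similarity(a, b):
    """Assumed metric: relative similarity in [0, 1]; 1.0 when identical."""
    denom = max(abs(a), abs(b), 1e-9)
    return max(0.0, 1.0 - abs(a - b) / denom)

def is_first_face_feature(target, second_user_values,
                          sim_threshold=0.2, ratio_threshold=0.1):
    """True if few enough second users are similar to the target feature."""
    first_number = sum(1 for v in second_user_values
                       if similarity(target, v) >= sim_threshold)
    second_number = len(second_user_values)
    return first_number / second_number <= ratio_threshold

# A rare eye-break length among five second users counts as distinctive:
eye_lengths = [60.0, 55.0, 58.0, 62.0, 57.0]
rare = is_first_face_feature(10.0, eye_lengths)      # True
common = is_first_face_feature(58.0, eye_lengths)    # False
```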
Step 302, amplifying the first face feature, and generating a cartoon head image for the first user according to the amplified first face feature.
In this step, optionally, a cartoon head image may be generated for the first user according to a third face feature and the amplified first face feature, where the third face feature and the first face feature are face features of different parts. For example, if the first face features are the eye feature and the face-shape feature, the third face features may be the nose, ear, eyebrow, and lip features.
The third face feature may be determined according to the face features of other users; for example, the third face feature may be determined from face features in the face feature set that more than 90% of the users share with a similarity greater than 90%. Alternatively, the third face feature may be a preset face feature.
Optionally, after the cartoon head image is generated for the first user, the plurality of face features of the first user may be added to the face feature set.
According to the head portrait generation method, the first face features representing the face features of the first user are determined from the face features of the first user according to the face feature set, the face features of the second user are included in the face feature set, the first face features are amplified, and the cartoon head portrait is generated for the first user according to the amplified first face features, so that the generated cartoon head portrait can highlight the face features of the first user, and the first user can be identified better.
Fig. 4 is a schematic flow chart of a third embodiment of an avatar generation method according to an embodiment of the present invention. This embodiment mainly describes, based on the embodiment shown in fig. 3, an alternative implementation of generating a cartoon head image for the first user according to the amplified first face feature. As shown in fig. 4, the method of the present embodiment may include:
step 401, determining a first face feature and a second face feature from a plurality of face features of a first user according to a face feature set.
In this step, the second face feature and the first face feature are face features of different parts, and the first face feature is a face feature representing a facial characteristic of the first user. For the specific manner of determining the first face feature from the plurality of face features of the first user according to the face feature set, reference may be made to the description of the embodiment shown in fig. 3, which is not repeated here.
Optionally, a plurality of feature thresholds may be determined according to the face feature set, the plurality of feature thresholds corresponding to the plurality of face features of the first user, and the second face feature may be determined by comparing each of the plurality of face features of the first user with its corresponding feature threshold. For example, assume that a feature threshold 1 and a feature threshold 2 may be determined according to the face feature set, where feature threshold 1 is greater than feature threshold 2 and both represent the eye-break length. When face feature 1 of the first user, representing the eye-break length, is smaller than feature threshold 1 and greater than feature threshold 2, this may indicate that the eye-break length is not a distinctive facial characteristic of the first user, so face feature 1 may be determined to be the second face feature.
Alternatively, the second face feature may be determined by comparing the plurality of face features of the first user with the face features of the plurality of second users in the face feature set. Further optionally, the determining a second face feature from the plurality of face features of the first user according to the face feature set may specifically include:
determining whether a ratio of a third number to the second number is greater than or equal to a second ratio threshold, wherein the third number is the number of second users in the face feature set whose corresponding face feature has a similarity greater than or equal to a second similarity threshold with a target face feature of the first user, the target face feature being taken over each of the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and
if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
Alternatively, the second similarity threshold may be greater than the first similarity threshold described above.
The second ratio threshold may be, for example, 90%. The second similarity threshold may be, for example, 80%.
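The ratio test above can be sketched as follows, using the example thresholds as defaults. The function name and the scalar similarity measure are assumptions for illustration; the disclosure does not specify how similarity between features is computed:

```python
def is_second_face_feature_by_ratio(target_feature, population_features,
                                    similarity,
                                    second_similarity_threshold=0.8,
                                    second_ratio_threshold=0.9):
    """Count the second users whose corresponding feature is similar to
    the first user's target feature (the third number); if that count
    covers at least second_ratio_threshold of the population (the second
    number), the feature is common, hence a second face feature."""
    third_number = sum(
        1 for f in population_features
        if similarity(target_feature, f) >= second_similarity_threshold)
    second_number = len(population_features)
    return third_number / second_number >= second_ratio_threshold

# Toy similarity on normalized scalar features (an assumption for the sketch).
sim = lambda a, b: 1.0 - abs(a - b)
```

Run over every face feature of the first user in turn (the "target face feature traverses" wording above), this classifies each feature as common or not.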
Step 402, enlarging the first face feature, and reducing the second face feature.
In this step, for a specific way of amplifying the first face feature, reference may be made to the related description of the embodiment shown in fig. 2, which is not repeated herein.
The reduction of the second face feature may be understood as weakening the second face feature. For example, assume that the eye length feature of the first user is a first eye length, the eye shape feature is a first shape, both the eye length feature and the eye shape feature are second face features of the first user, and the first eye length is smaller than a preset eye length. Reducing the second face feature may then specifically be increasing the eye length from the first eye length to a second eye length, where the second eye length is greater than the first eye length and less than or equal to the preset eye length; further, generating the cartoon head image according to the reduced second face feature may specifically include generating the cartoon head image with the eye length being the second eye length and the eye shape being the first shape.
For another example, assume that the eye length feature of the first user is a third eye length, the eye shape feature is a first shape, both the eye length feature and the eye shape feature are second face features of the first user, and the third eye length is greater than the preset eye length. Reducing the second face feature may then specifically be decreasing the eye length from the third eye length to a fourth eye length, where the fourth eye length is less than the third eye length and greater than or equal to the preset eye length; further, generating the cartoon head image according to the reduced second face feature may specifically include generating the cartoon head image with the eye length being the fourth eye length and the eye shape being the first shape.
Here, by reducing the second face feature, the second face feature may be weakened, so that the first face feature, and thus the facial features of the first user, may be further emphasized.
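Both examples above move the feature value toward the preset value without crossing it. Under that reading, the reduction can be sketched as a simple interpolation; the function name and the interpolation strength are assumptions:

```python
def reduce_second_feature(value, preset, strength=0.5):
    """Weaken a non-distinctive feature by moving its value part of the
    way toward the preset (typical) value. With 0 < strength <= 1 the
    result stays on the same side of the preset, so an undersized value
    is increased and an oversized value is decreased, as in the two
    examples in the text."""
    return value + strength * (preset - value)
```

With `strength=1.0` the feature collapses onto the preset value, i.e. it is fully neutralized.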
Step 403, generating a cartoon head image for the first user according to the enlarged first face feature and the reduced second face feature.
In this step, a cartoon head image is generated for the first user according to the reduced second face feature and the enlarged first face feature, so that the generated cartoon head image can highlight the first face feature and weaken the second face feature, and further highlight the facial features of the first user.
According to the head portrait generating method, the first face feature and the second face feature are determined from the plurality of face features of the first user according to the face feature set, the first face feature is amplified, the second face feature is reduced, and a cartoon head portrait is generated for the first user according to the amplified first face feature and the reduced second face feature, so that the generated cartoon head portrait can highlight the facial features of the first user and the first user can be recognized more easily.
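The determine-then-adjust flow summarized above can be sketched as follows. Features are represented as a dict of scalar values and presets stand for typical population values; the names, factors, and dict representation are all assumptions for the sketch:

```python
def exaggerate(features, first_keys, second_keys, presets,
               enlarge_factor=1.5, shrink_factor=0.5):
    """Step 402: enlarge first (distinctive) face features by pushing
    their values away from the preset value, and reduce second (common)
    face features by pulling their values toward it. Step 403 would then
    render the cartoon head image from the adjusted values."""
    adjusted = dict(features)
    for key in first_keys:
        adjusted[key] = presets[key] + enlarge_factor * (features[key] - presets[key])
    for key in second_keys:
        adjusted[key] = features[key] + shrink_factor * (presets[key] - features[key])
    return adjusted
```

Features classified as neither first nor second would simply pass through unchanged.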
Fig. 5 is a schematic structural diagram of an avatar generating apparatus according to an embodiment of the present invention; the apparatus provided in this embodiment may be applied to the foregoing method embodiments to implement the functions of the terminal 11, the server 12, or other devices. As shown in fig. 5, the apparatus of this embodiment may include: a determining module 51 and a generating module 52. Wherein,
a determining module 51, configured to determine, from a plurality of face features of a first user, a first face feature representing a facial feature of the first user;
the generating module 52 is configured to amplify the first face feature determined by the determining module 51, and generate a cartoon head image for the first user according to the amplified first face feature.
In one possible implementation, the determining module 51 is specifically configured to:
determine, according to the face feature set, a first face feature representing the facial features of the first user from the plurality of face features of the first user; the face feature set comprises face features of a plurality of second users.
In one possible implementation, the determining module 51 is configured to determine, according to the face feature set, a first face feature representing the facial features of the first user from the plurality of face features of the first user, which specifically includes:
judging whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, where the first number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a first similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
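The distinctiveness test applied by the determining module mirrors the second-feature test with the inequality reversed; a sketch under the same assumptions (illustrative similarity measure and threshold defaults, none of which are specified by the disclosure):

```python
def is_first_face_feature_by_ratio(target_feature, population_features,
                                   similarity,
                                   first_similarity_threshold=0.8,
                                   first_ratio_threshold=0.1):
    """A feature shared with at most first_ratio_threshold of the second
    users is rare in the population, hence a distinctive first face
    feature of the first user."""
    first_number = sum(
        1 for f in population_features
        if similarity(target_feature, f) >= first_similarity_threshold)
    second_number = len(population_features)
    return first_number / second_number <= first_ratio_threshold
```

Note the design choice implied by the text: rarity is measured relative to the whole population, so the same thresholds work regardless of how many second users the face feature set contains.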
In a possible implementation, the determining module 51 is further configured to:
determining a second face feature from a plurality of face features of the first user according to the face feature set, wherein the second face feature and the first face feature are face features of different parts;
the generating module 52 is further configured to reduce the second face feature;
the generating module 52 is configured to generate a cartoon head image for the first user according to the amplified first face feature, which specifically includes: generating a cartoon head image for the first user according to the amplified first face feature and the reduced second face feature.
In one possible implementation, the determining module 51 is configured to determine, from the face feature set, a second face feature from a plurality of face features of the first user, and specifically includes:
judging whether a ratio of a third number to a second number is greater than or equal to a second ratio threshold, where the third number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a second similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
In one possible implementation, the generating module 52 is configured to generate a cartoon head image for the first user according to the amplified first face feature, which specifically includes:
generating a cartoon head image for the first user according to a third face feature and the amplified first face feature, where the third face feature and the first face feature are face features of different parts.
In one possible implementation, the third face feature is a preset face feature.
The device of this embodiment may be used to implement the technical solutions of the foregoing method embodiments; the implementation principles and technical effects are similar and are not repeated here.
Fig. 6 is a schematic structural diagram of an avatar generating apparatus according to an embodiment of the present invention, and as shown in fig. 6, the avatar generating apparatus may include: a processor 61 and a memory 62 for storing computer instructions.
The processor 61 executes the computer instructions to perform the following method:
determining a first face feature representing the facial features of a first user from a plurality of face features of the first user;
amplifying the first face feature, and generating a cartoon head image for the first user according to the amplified first face feature.
In one possible implementation, the determining, from a plurality of face features of a first user, a first face feature representing a face feature of the first user includes:
according to the face feature set, determining a first face feature representing the facial features of the first user from the plurality of face features of the first user; the face feature set comprises face features of a plurality of second users.
In one possible implementation, the determining, according to the face feature set, a first face feature representing a facial feature of the first user from a plurality of face features of the first user includes:
judging whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, where the first number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a first similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
In one possible implementation, the method further comprises:
determining a second face feature from a plurality of face features of the first user according to the face feature set, wherein the second face feature and the first face feature are face features of different parts;
reducing the second face feature;
The generating a cartoon head image for the first user according to the amplified first face feature comprises the following steps:
and generating a cartoon head image for the first user according to the amplified first face feature and the reduced second face feature.
In one possible implementation, the determining, according to the face feature set, a second face feature from the plurality of face features of the first user includes:
judging whether a ratio of a third number to a second number is greater than or equal to a second ratio threshold, where the third number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a second similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
In one possible implementation, the generating a cartoon head image for the first user according to the enlarged first face feature includes:
and generating a cartoon head image for the first user according to a third face feature and the amplified first face feature, where the third face feature and the first face feature are face features of different parts.
In one possible implementation, the third face feature is a preset face feature.
An embodiment of the present invention further provides a computer-readable storage medium storing instructions that, when executed by a processor of an avatar generation device, cause the avatar generation device to perform an avatar generation method comprising:
determining a first face feature representing the facial features of a first user from a plurality of face features of the first user;
amplifying the first face feature, and generating a cartoon head image for the first user according to the amplified first face feature.
In one possible implementation, the determining, from a plurality of face features of a first user, a first face feature representing a face feature of the first user includes:
according to the face feature set, determining a first face feature representing the facial features of the first user from the plurality of face features of the first user; the face feature set comprises face features of a plurality of second users.
In one possible implementation, the determining, according to the face feature set, a first face feature representing a facial feature of the first user from a plurality of face features of the first user includes:
judging whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, where the first number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a first similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
In one possible implementation, the method further comprises:
determining a second face feature from a plurality of face features of the first user according to the face feature set, wherein the second face feature and the first face feature are face features of different parts;
reducing the second face feature;
The generating a cartoon head image for the first user according to the amplified first face feature comprises the following steps:
and generating a cartoon head image for the first user according to the amplified first face feature and the reduced second face feature.
In one possible implementation, the determining, according to the face feature set, a second face feature from the plurality of face features of the first user includes:
judging whether a ratio of a third number to a second number is greater than or equal to a second ratio threshold, where the third number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a second similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
In one possible implementation, the generating a cartoon head image for the first user according to the enlarged first face feature includes:
and generating a cartoon head image for the first user according to a third face feature and the amplified first face feature, where the third face feature and the first face feature are face features of different parts.
In one possible implementation, the third face feature is a preset face feature.
Those of ordinary skill in the art will appreciate that all or part of the steps of the method embodiments described above may be completed by hardware under the control of program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (8)

1. A head portrait generation method, comprising:
according to a face feature set, determining a first face feature representing the face feature of a first user from a plurality of face features of the first user; the face feature set comprises face features of a plurality of second users;
the determining a first face feature representing the facial features of the first user comprises: judging whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, wherein the first number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a first similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set; and if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature;
amplifying the first face feature, and generating a cartoon head image for the first user according to the amplified first face feature.
2. The method according to claim 1, wherein the method further comprises:
determining a second face feature from a plurality of face features of the first user according to the face feature set, wherein the second face feature and the first face feature are face features of different parts;
reducing the second face feature;
the generating a cartoon head image for the first user according to the amplified first face feature comprises the following steps:
and generating a cartoon head image for the first user according to the amplified first face feature and the reduced second face feature.
3. The method of claim 2, wherein the determining a second face feature from the plurality of face features of the first user based on the set of face features comprises:
judging whether a ratio of a third number to a second number is greater than or equal to a second ratio threshold, wherein the third number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a second similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the third number to the second number is greater than or equal to the second ratio threshold, determining that the target face feature of the first user is the second face feature.
4. The method of any of claims 1-3, wherein the generating a caricature head portrait for the first user from the enlarged first face feature comprises:
and generating a cartoon head image for the first user according to a third face feature and the amplified first face feature, wherein the third face feature and the first face feature are face features of different parts.
5. The method of claim 4, wherein the third face feature is a preset face feature.
6. An avatar generation device, comprising:
the device comprises a determining module, a judging module and a judging module, wherein the determining module is used for determining a first face characteristic representing the face characteristics of a first user from a plurality of face characteristics of the first user;
the generation module is used for amplifying the first face characteristics determined by the determination module and generating cartoon head images for the first user according to the amplified first face characteristics;
the determining module is specifically configured to:
According to the face feature set, determining a first face feature representing the face feature of the first user from a plurality of face features of the first user; the face feature set comprises face features of a plurality of second users;
the determining module is configured to determine, according to a face feature set, a first face feature representing a face feature of the first user from a plurality of face features of the first user, and specifically includes:
judging whether a ratio of a first number to a second number is less than or equal to a first ratio threshold, wherein the first number is the number of second users in the face feature set whose similarity with a target face feature of the first user is greater than or equal to a first similarity threshold, the target face feature traverses the plurality of face features of the first user, and the second number is the total number of second users in the face feature set;
and if the ratio of the first number to the second number is less than or equal to the first ratio threshold, determining that the target face feature of the first user is the first face feature.
7. An avatar generation device, comprising:
A processor and a memory for storing computer instructions; the processor executing the computer instructions to perform the method of any of claims 1-5.
8. A computer readable storage medium, characterized in that instructions in the storage medium, when executed by a processor of an avatar generation device, enable the avatar generation device to perform the method of any one of claims 1-5.
CN201910130415.3A 2019-02-21 2019-02-21 Head portrait generation method, device and storage medium Active CN109993807B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910130415.3A CN109993807B (en) 2019-02-21 2019-02-21 Head portrait generation method, device and storage medium


Publications (2)

Publication Number Publication Date
CN109993807A CN109993807A (en) 2019-07-09
CN109993807B true CN109993807B (en) 2023-05-30

Family

ID=67130297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910130415.3A Active CN109993807B (en) 2019-02-21 2019-02-21 Head portrait generation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109993807B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112231669A (en) * 2020-09-25 2021-01-15 上海淇毓信息科技有限公司 Page display method and device based on facial recognition and electronic equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
US7440590B1 (en) * 2002-05-21 2008-10-21 University Of Kentucky Research Foundation System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns
EP3147827A1 (en) * 2015-06-24 2017-03-29 Samsung Electronics Co., Ltd. Face recognition method and apparatus
CN106682632A (en) * 2016-12-30 2017-05-17 百度在线网络技术(北京)有限公司 Method and device for processing face images

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
KR20000063344A (en) * 2000-06-26 2000-11-06 김성호 Facial Caricaturing method
US7483553B2 (en) * 2004-03-29 2009-01-27 Microsoft Corporation Caricature exaggeration
US7660482B2 (en) * 2004-06-23 2010-02-09 Seiko Epson Corporation Method and apparatus for converting a photo to a caricature image
CN101477696B (en) * 2009-01-09 2011-04-13 苏州华漫信息服务有限公司 Human character cartoon image generating method and apparatus
CN104463779A (en) * 2014-12-18 2015-03-25 北京奇虎科技有限公司 Portrait caricature generating method and device
CN107730573A (en) * 2017-09-22 2018-02-23 西安交通大学 A kind of personal portrait cartoon style generation method of feature based extraction


Non-Patent Citations (1)

Title
Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing;Jessica L. Irons 等;《Vision Research》;第137卷;61-79 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant