CN113724366B - 3D model generation method, device and equipment


Info

Publication number
CN113724366B
Authority
CN
China
Prior art keywords
color
hairstyle
target user
model
matching
Prior art date
Legal status
Active
Application number
CN202010449523.XA
Other languages
Chinese (zh)
Other versions
CN113724366A (en)
Inventor
Fu Yan (傅艳)
Lou Xinyi (娄心怡)
Zhu Jia (朱嘉)
Song Yingying (宋莹莹)
Current Assignee
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd
Priority to CN202010449523.XA
Publication of CN113724366A
Application granted
Publication of CN113724366B


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The application relates to a 3D model generation method, device and equipment. The 3D model generation method comprises the following steps: acquiring a face image of a target user; generating a target user 3D model according to the face image; obtaining the pupil color of the target user 3D model, and matching a corresponding hair color according to the pupil color; acquiring head features of the target user 3D model, and matching a corresponding hairstyle according to the head features; and displaying the matched hairstyle and hair color on the target user 3D model. According to the scheme provided by the application, a suitable hairstyle and hair color can be accurately recommended to the user.

Description

3D model generation method, device and equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for generating a 3D model.
Background
With the continuous improvement of living standards, people pay attention not only to material life but also to their spiritual well-being, and therefore care more about dress and grooming. In current fashion trends, a suitable hairstyle and hair color can greatly improve a person's image and presence, and can also boost their self-confidence in front of others.
In the related art, 2D hairstyle pictures are generally provided for the user to choose from; for example, the user can pre-select a hairstyle and hair color from the 2D hairstyle pictures provided by a barber shop. However, the user's facial features, face shape, and other conditions may differ from those of the model in the picture, so the model's hairstyle is not necessarily suitable for the user, and the actual effect can only be seen after the hair has been cut. It can be seen that a 2D hairstyle picture cannot show the effect the recommended hairstyle and hair color will actually have on the user.
Therefore, the related-art method based on 2D hairstyle pictures cannot accurately recommend a suitable hairstyle and hair color to the user.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a 3D model generation method, device and equipment, which can accurately recommend a suitable hairstyle and hair color to the user.
The first aspect of the application provides a 3D model generating method, which includes:
acquiring a face image of a target user;
generating a target user 3D model according to the face image;
obtaining a pupil color of the target user 3D model, and matching a corresponding hair color according to the pupil color;
acquiring head features of the target user 3D model, and matching a corresponding hairstyle according to the head features;
and displaying the matched hairstyle and hair color on the target user 3D model.
In one embodiment, the head features include a face shape and facial features;
the step of matching the corresponding hairstyle according to the head features comprises:
matching a corresponding hairstyle length according to the face shape, and matching a corresponding hairstyle style according to the facial features.
In one embodiment, the facial features include: facial contour features and facial appearance features;
the facial profile features include at least one of a temple concavity, a zygomatic arch convexity, and a mandibular angular width;
the facial features include at least one of eyebrow features, nose features, and mouth features.
In one embodiment, the head features further comprise a head form;
the step of matching the corresponding hairstyle according to the head characteristics comprises the following steps:
and matching the corresponding hairstyle length according to the facial features and the corresponding hairstyle style according to the head shape.
In one embodiment, the method further comprises: acquiring neck features and/or shoulder features of the target user 3D model;
and when matching the corresponding hairstyle according to the head features, further combining the neck features and/or the shoulder features to match the hairstyle.
In one embodiment, the method further comprises: acquiring freckle features and/or acne-mark features of the face of the target user 3D model;
and when matching the corresponding hair color according to the pupil color, further combining the freckle features and/or the acne-mark features to match the hair color.
In one embodiment, the obtaining pupil color of the target user 3D model includes:
determining the pupil color of the target user by acquiring key feature points, color values and brightness values of the eye area of the target user 3D model, and performing color analysis according to a set algorithm.
In one embodiment, the displaying the matched hairstyle and hair color on the target user 3D model includes:
when two or more hairstyles and hair colors are matched, switching the hairstyle and hair color of the target user 3D model, either automatically or manually according to the user's selection.
A second aspect of the present application provides a 3D model generating apparatus, including:
the image acquisition module is used for acquiring a face image of a target user;
the model generation module is used for generating a target user 3D model according to the face image acquired by the image acquisition module;
the hair color matching module is used for acquiring the pupil color of the target user 3D model generated by the model generation module, and matching a corresponding hair color according to the pupil color;
the hairstyle matching module is used for acquiring the head characteristics of the 3D model of the target user generated by the model generating module and matching the corresponding hairstyle according to the head characteristics;
and the hairstyle display module is used for displaying the matched hairstyle and hair color on the target user 3D model generated by the model generation module.
In one embodiment, the hairstyle matching module comprises:
the face shape processing submodule is used for matching a corresponding hairstyle length according to the face shape in the head features;
and the facial feature processing sub-module is used for matching a corresponding hairstyle style according to the facial features in the head features.
In one embodiment, the hairstyle matching module further comprises:
and the head shape processing sub-module is used for matching a corresponding hairstyle style according to the head shape in the head features.
A third aspect of the present application provides an electronic device, comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method described above.
A fourth aspect of the present application provides a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the method described above.
The technical solutions provided by the present application may include the following beneficial effects:
according to the scheme, after the face image of the target user is obtained, a target user 3D model is generated according to the face image, then pupil colors of the target user 3D model are obtained, head features of the target user 3D model are obtained, then corresponding hair colors are matched according to the pupil colors, corresponding hair styles are matched according to the head features, and finally the matched hair styles and hair colors are displayed on the target user 3D model. Because the 3D model of the target user is generated according to the face image of the user, and the three-dimensional effect of the 3D model of the target user is more attached to the actual situation of the target user, the actual appearance characteristics of the target user can be accurately reflected, and therefore the exchange experience of attaching the actual effect can be selected for the target user based on the 3D model of the target user. Furthermore, according to the color matching corresponding to the pupil color of the 3D model of the target user and the hairstyle matching corresponding to the head characteristic of the 3D model of the target user, the recommendation result can be more matched with the actual appearance characteristic of the user, the recommendation is more accurate and more vivid, the user has more reference value, and the user experience can be improved.
According to the above scheme, the head features may include a face shape and facial features, so that a corresponding hairstyle length can be matched according to the face shape and a corresponding hairstyle style according to the facial features; the facial contour features include at least one of temple concavity, zygomatic arch protrusion, and mandibular angle width, and the facial appearance features include at least one of eyebrow features, nose features, and mouth features. In this way, a number of finer user characteristics can be considered together, making the recommended hairstyle and hair color more accurate.
According to the above scheme, the head shape, neck features, or shoulder features can further be referred to when matching the corresponding hairstyle, so that the hairstyle and hair color recommendation takes more factors into account and is therefore more accurate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flow chart of a 3D model generating method according to an embodiment of the present application;
FIG. 2 is another flow chart of a 3D model generation method according to an embodiment of the present disclosure;
FIG. 3 is another flow chart of a 3D model generation method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of determining a temple recess in a 3D model generation method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a 3D model generating device according to an embodiment of the present application;
fig. 6 is another schematic structural diagram of the 3D model generating apparatus shown in the embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first message may also be referred to as a second message, and similarly, a second message may also be referred to as a first message, without departing from the scope of the present application. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
With the advent of the "age of good looks", the display and pursuit of beauty have led people to attach importance to dress and makeup, and a suitable hairstyle and hair color help to improve one's personal image. In the related art, hairstyles are recommended to the user only through generated 2D image information, which cannot accurately recommend a suitable hairstyle, let alone support hair color selection. In view of the above problems, the embodiments of the present application provide a 3D model generation method that can accurately recommend a suitable hairstyle and hair color to the user. The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a 3D model generating method according to an embodiment of the present application.
Referring to fig. 1, the method flow includes:
in step S101, a face image of a target user is acquired.
The acquired face image of the target user may be a picture of the user's front face and/or side faces. Specifically, the user can be prompted to move the head into a shooting area, and the camera of the terminal captures a front-face picture of the user; the user then turns the head to the left and right as prompted, and the camera captures the side-face pictures. The face image of the target user is thus acquired, and a target user 3D model can be generated from the captured picture information.
It should be noted that the face image of the target user may also be obtained by directly acquiring a front-face image and two side-face images uploaded by the user.
In step S102, a target user 3D model is generated from the face image.
Generating the target user 3D model from the face image may mean generating the target user 3D model from pictures of the user's front face and/or side faces.
In step S103, the pupil color of the target user 3D model is acquired, and a corresponding hair color is matched according to the pupil color.
The pupil color of the target user 3D model can be obtained by acquiring key feature points, color values and brightness values of the eye area of the target user 3D model and performing color analysis according to a set algorithm to determine the user's pupil color. After the pupil color is determined, a corresponding hair color can be matched according to a mapping relationship between pupil colors and hair colors.
The key feature points may include, for example, feature points of the sclera (the white of the eye), the eyeball, and the iris. The iris adjusts the size of the pupil and contains pigment: eyes with the largest amount of pigment are black; eyes with a large amount are generally brown; and eyes with the least amount are typically blue. An image of the above feature points is acquired and can be represented by pixels, and various colors can be described by computing the RGB (Red, Green, Blue) average of the pixels. Specifically, the color of each region of the eye can be computed from the RGB averages of its pixels, and the pupil color can finally be determined from the computed colors of the eye regions. The set algorithm may be, for example, but is not limited to, a color feature extraction algorithm, such as the color histogram method or a wavelet-based segmented-image color feature extraction method; the specific way of extracting color features can be implemented by various means and is not described in detail here.
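To make the pixel-averaging idea concrete, the following is a minimal Python sketch of estimating a pupil color from an iris crop and looking up candidate hair colors. The helper names, thresholds, and the contents of the mapping table are illustrative assumptions; the patent does not fix a concrete algorithm or table.

```python
import numpy as np

# Assumed pupil-color -> hair-color table, modeled on the idea of a preset
# mapping relationship (values are illustrative, not the patent's presets).
PUPIL_TO_HAIR = {
    "black": ["black tea", "chocolate"],
    "brown": ["caramel", "honey brown"],
    "blue":  ["misty flaxen gray", "haze blue"],
}

def mean_rgb(region: np.ndarray) -> np.ndarray:
    """Average RGB value of an H x W x 3 image crop."""
    return region.reshape(-1, 3).mean(axis=0)

def classify_pupil(iris_crop: np.ndarray) -> str:
    """Very rough classification by mean color and brightness.
    Thresholds are invented for illustration."""
    r, g, b = mean_rgb(iris_crop)
    brightness = (r + g + b) / 3.0
    if brightness < 60:            # very dark iris -> black
        return "black"
    if b > r and b > g:            # blue-dominant mean -> blue
        return "blue"
    return "brown"

def match_hair_colors(iris_crop: np.ndarray) -> list[str]:
    """Combine the two steps: classify the pupil, then look up hair colors."""
    return PUPIL_TO_HAIR[classify_pupil(iris_crop)]
```

In practice the crop would come from the detected eye key points, and the mapping table would be the preset one described later in Table 5.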
In step S104, the head features of the 3D model of the target user are acquired, and the corresponding hairstyle is matched according to the head features.
Specifically, the head features of the target user 3D model include a face shape and facial features; a corresponding hairstyle length can be matched according to the face shape, and a corresponding hairstyle style according to the facial features. The facial features may include facial contour features, such as temple concavity, zygomatic arch protrusion, and mandibular angle width, and the corresponding hairstyle is matched according to a mapping relationship between facial contour features and hairstyles.
The step S103 and the step S104 do not have a necessary sequence relationship.
In step S105, the matched hairstyle and hair color are displayed on the target user 3D model.
When two or more hairstyles and hair colors are matched, the hairstyle and hair color of the target user 3D model can be switched, either automatically or manually according to the user's selection. The switching function lets the target user screen and compare multiple hairstyles and hair colors, provides more suggestions to choose from, and lets the user settle on a final result according to their own preference, improving both experience and functionality.
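As a small illustration of the switching behavior just described, the sketch below cycles through candidate (hairstyle, hair color) pairs; a timer could drive next() for automatic switching, or a UI control for manual switching. The class and all names are hypothetical.

```python
from itertools import cycle

class StyleSwitcher:
    """Cycles endlessly through matched (hairstyle, hair color) candidates."""
    def __init__(self, combos):
        self._iter = cycle(combos)      # wraps around after the last candidate
        self.current = next(self._iter)

    def next(self):
        """Advance to the next candidate (called on user action or a timer)."""
        self.current = next(self._iter)
        return self.current

switcher = StyleSwitcher([("bob", "caramel"), ("lob", "cool brown")])
print(switcher.next())                  # -> ("lob", "cool brown"), then wraps
```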
In this embodiment, after the face image of the target user is acquired, a target user 3D model is generated from the face image; the pupil color and head features of the target user 3D model are then obtained; a corresponding hair color is matched according to the pupil color and a corresponding hairstyle according to the head features; and finally the matched hairstyle and hair color are displayed on the target user 3D model. Because the target user 3D model is generated from the user's own face image, its three-dimensional appearance closely fits the target user's actual situation and accurately reflects the user's real appearance features, so a hairstyle try-on experience that matches the real effect can be provided for the target user on the basis of the 3D model. Furthermore, by matching the hair color to the pupil color of the target user 3D model and matching the hairstyle to its head features, the recommendation results better fit the user's actual appearance; the recommendations are more accurate and more vivid, carry more reference value, and can improve the user experience.
Fig. 2 is another flow chart of a 3D model generation method according to an embodiment of the present application. Fig. 2 illustrates matching a corresponding hairstyle according to the face shape and facial features, taking facial appearance features as the example.
Referring to fig. 2, the method flow includes:
in step S201, a face image of a target user is acquired.
The acquired face image of the target user may be a picture of the user's front face and/or side faces. Specifically, the user can be prompted to move the head into a shooting area, and the camera of the terminal captures a front-face picture of the user; the user then turns the head to the left and right as prompted, and the camera captures the side-face pictures. The face image of the target user is thus acquired, and a target user 3D model can be generated from the captured picture information.
It should be noted that the face image of the target user may also be obtained by directly acquiring a front-face image and two side-face images uploaded by the user.
In step S202, a target user 3D model is generated from the face image.
Generating the target user 3D model from the face image may mean generating the target user 3D model from pictures of the user's front face and/or side faces.
In step S203, the pupil color of the target user 3D model is acquired, and a corresponding hair color is matched according to the pupil color.
The pupil color of the target user 3D model can be obtained by acquiring key feature points, color values and brightness values of the eye area of the target user 3D model and performing color analysis according to a set algorithm to determine the user's pupil color. After the pupil color is determined, a corresponding hair color can be matched according to a mapping relationship between pupil colors and hair colors.
The key feature points may include, for example, feature points of the sclera (the white of the eye), the eyeball, and the iris. The iris adjusts the size of the pupil and contains pigment: eyes with the largest amount of pigment are black; eyes with a large amount are generally brown; and eyes with the least amount are typically blue. An image of the above feature points is acquired and can be represented by pixels, and various colors can be described by computing the RGB (Red, Green, Blue) average of the pixels. Specifically, the color of each region of the eye can be computed from the RGB averages of its pixels, and the pupil color can finally be determined from the computed colors of the eye regions.
The set algorithm may be, for example, but is not limited to, a color feature extraction algorithm, such as the color histogram method or a wavelet-based segmented-image color feature extraction method; the specific way of extracting color features can be implemented by various means and is not described in detail here.
In step S204, the face shape and the facial features of the 3D model of the target user are acquired, and the corresponding hairstyle is matched according to the face shape and the facial features.
Matching the corresponding hairstyle according to the face shape and the facial features may mean matching the hairstyle length according to the face shape and the hairstyle style according to the facial features. The facial features may include facial appearance features such as eyebrow features, nose features, and mouth features. Nose types may generally include narrow, medium, and wide noses; mouth types may generally include standard, thin, and full lips. Taking the eyebrow feature as the example, the corresponding hairstyle length can be matched according to the face shape, and the corresponding hairstyle style according to the eyebrow feature. The eyebrow features may include at least one, or a combination, of the following: eyebrow shape, eyebrow color, and eyebrow shade; according to the specific eyebrow shape, the corresponding hairstyle is matched using a mapping relationship between eyebrow shapes and hairstyles.
It should be noted that step S203 and step S204 do not have a necessary sequence relationship.
In step S205, the matched hairstyle and hair color are displayed on the target user 3D model.
When two or more hairstyles and hair colors are matched, the hairstyle and hair color of the target user 3D model can be switched, either automatically or manually according to the user's selection. The switching function lets the target user screen and compare multiple hairstyles and hair colors, provides more suggestions to choose from, and lets the user settle on a final result according to their own preference, improving both experience and functionality.
In this embodiment, the corresponding hairstyle length may be matched according to the user's face shape, and the corresponding hairstyle style according to facial appearance features such as the eyebrow features. More of the user's characteristics can thus be considered together, making the recommended hairstyle and hair color more accurate.
Fig. 3 is another flow chart of a 3D model generating method according to an embodiment of the present application. Fig. 3 describes the solution of the present application in more detail with respect to fig. 1 and 2, also taking into account more features in combination.
In this embodiment, a target user 3D model may be generated from the user's face image, and a hairstyle and hair color recommended according to the pupil color, head features, neck features, shoulder features, and the like. The head features may include a head shape, a face shape, and facial features; the facial features may include facial contour features and facial appearance features; the facial contour features may include at least one of temple concavity, zygomatic arch protrusion, and mandibular angle width; and the facial appearance features may include at least one of eyebrow features, nose features, and mouth features. The hairstyle itself covers both hairstyle length and hairstyle style.
Referring to fig. 3, the method flow includes:
in step S301, a face image of a target user is acquired.
The acquired face image of the target user may be a picture of the user's front face and/or side faces. Specifically, the user is prompted to move the head into a shooting area, and the camera of the terminal captures a front-face picture of the user; the user then turns the head to the left and right as prompted, and the camera captures the side-face pictures. The face image of the target user is thus acquired, and a target user 3D model can be generated from the captured picture information.
More specifically, this embodiment may acquire a face picture taken by the user through the camera of a terminal device, or a face picture uploaded directly by the user. The face picture generally includes the complete frontal facial region, the lateral cheek regions, the ears, and so on, and may comprise pictures of the front face and/or side faces. The terminal device may be a mobile phone, a tablet, etc. Shooting with the camera may mean first recording a video and then capturing frames from it, or taking photos directly. The camera can be a 3D camera or a 2D camera: a 3D camera can acquire the three-dimensional spatial coordinates of each point in its field of view, so that a three-dimensional image can be reconstructed by an algorithm, whereas a 2D camera typically acquires only the two-dimensional coordinates, i.e. the (x, y) coordinates, of each point in the image.
It should be noted that the face image of the target user may also be obtained by directly acquiring a front-face image and two side-face images uploaded by the user.
In step S302, a target user 3D model is generated from the face image.
Generating the target user 3D model from the face image may mean generating the target user 3D model from pictures of the user's front face and/or side faces.
After the user's face pictures are obtained, a 3D face reconstruction algorithm can be used to generate the target user 3D model from the key feature points of the face. The algorithm may be one commonly used in the related art, for example an image-feature-based algorithm, a model-variable-parameter-based algorithm, or a deep-learning-based algorithm; the disclosure is not limited in this respect. The key feature points of the face may include, but are not limited to, the eyes, pupils, nose, mouth, ears, eyebrows, and the contour points of the various parts of the face. Taking the model-variable-parameter-based algorithm as an example: it combines 3D deformation of a generic face model with distance-mapping-based minimizing iteration to recover the head pose and the 3D face. The pose parameters are continuously updated as the model deformation changes, and the process repeats until the minimized measure meets the requirement. The biggest difference between the model-variable-parameter-based algorithm and the image-feature-based algorithm is that the latter must re-locate the feature-point coordinates after every change of face pose, while the former only needs to adjust the parameters of the 3D deformation model.
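The alternating pose/deformation idea can be sketched roughly as two interleaved least-squares updates. The code below is a highly simplified illustration under stated assumptions (weak-perspective/affine projection, a linear deformation basis, invented array shapes and names); it is not the patent's actual algorithm, and real 3D morphable-model fitting is considerably more involved.

```python
import numpy as np

def fit_face(lm2d, mean_shape, basis, iters=20, tol=1e-3):
    """lm2d: (N,2) detected 2D landmarks; mean_shape: (N,3) generic face;
    basis: (N,3,K) linear deformation basis. Returns deformation coefficients
    alpha and a 4x2 affine projection P (pose)."""
    N, _, K = basis.shape
    alpha = np.zeros(K)
    P = np.zeros((4, 2))
    for _ in range(iters):
        shape = mean_shape + basis @ alpha                 # deformed 3D shape (N,3)
        # Pose step: fit a 4x2 affine projection by least squares.
        A = np.hstack([shape, np.ones((N, 1))])            # (N,4) homogeneous coords
        P, *_ = np.linalg.lstsq(A, lm2d, rcond=None)       # (4,2)
        if np.linalg.norm(A @ P - lm2d) < tol:             # error small enough: stop
            break
        # Shape step: with the pose fixed, the projection is linear in alpha.
        P3, P4 = P[:3], P[3]                               # rotation/scale part, offset
        base = mean_shape @ P3 + P4                        # projected mean shape (N,2)
        C = np.einsum('dt,ndk->ntk', P3, basis).reshape(2 * N, K)
        alpha, *_ = np.linalg.lstsq(C, (lm2d - base).reshape(-1), rcond=None)
    return alpha, P
```

Each iteration re-estimates the pose for the current shape, then refits the deformation coefficients against the remaining landmark residual, matching the "update pose as the deformation changes, repeat until the error is small" loop described above.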
In step S303, the face shape of the target user 3D model is determined, and a hairstyle length matching that face shape is determined according to a preset mapping relationship between face shapes and hairstyle lengths.
This embodiment can determine the face shape from data such as the face aspect ratio, the zygomatic-arch-to-mandible ratio, the variance of the face contour points, and the like.
This embodiment can preset several face shape types; for example, the face shape can be divided into 7 types: melon-seed (oval) face, pear-shaped face, square face, doll face, elongated face, standard face, and mixed face.
This embodiment may also preset several hairstyle length types; for example, hairstyles may be divided into short, medium, and long hair. It should be noted that further subdivisions can be made under these 3 main types.
Matching the face shape to a hairstyle mainly concerns the hairstyle length. For example, elongated and square faces are not well suited to short hair, which visually lengthens the face; a long hairstyle should be chosen instead. The mapping relationship between face shapes and hairstyle lengths preset in this embodiment is shown in the following table, but is not limited thereto:
| No. | Face shape | Matched hairstyle lengths/styles |
| --- | --- | --- |
| 1 | Melon-seed (oval) face | Chic ear-length short cut; cool corn perm; center-parted long straight hair |
| 2 | Pear-shaped face | First-love cut; "cherokee rose" short cut; high-grade S-curl |
| 3 | Square face | Korean-style short cut; first-love cut; center-parted collarbone-length hair |
| 4 | Doll face | Retro wool curls; French big C-curl; French big waves |
| 5 | Elongated face | Playful bob; retro wool curls; pear-flower perm |
| 6 | Standard face | Elegant lob; versatile texture perm |
| 7 | Mixed face | Xin Zhilei cut; classical soft curls; relaxed "lazy" perm |

Table 1
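The classify-then-look-up flow of step S303 can be sketched as follows. The toy classifier, its thresholds, and the abbreviated table contents are illustrative assumptions, not the patent's presets.

```python
# Abbreviated version of the Table 1 idea (three of the seven face shapes).
FACE_TO_LENGTH = {
    "melon seed": ["ear-length short cut", "corn perm", "center-parted long straight"],
    "elongated":  ["bob", "retro wool curls", "pear-flower perm"],
    "square":     ["Korean-style short cut", "first-love cut", "collarbone-length"],
}

def classify_face(width: float, height: float, jaw_ratio: float) -> str:
    """Toy classifier from the face aspect ratio and the jaw-to-cheekbone
    width ratio; thresholds are invented for illustration."""
    aspect = height / width
    if aspect > 1.5:                 # much taller than wide
        return "elongated"
    if jaw_ratio > 0.9:              # jaw almost as wide as the cheekbones
        return "square"
    return "melon seed"

print(FACE_TO_LENGTH[classify_face(14.0, 22.0, 0.8)])   # -> elongated-face styles
```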
In step S304, the facial contour features of the target user 3D model are determined, and the hairstyle style is determined according to a preset mapping relationship between facial contour features and hairstyle styles.
This embodiment can detect set regions of the user's face, determine the facial contour features from the detected contour key points, and set facial contour feature labels for later reference.
For example, the determined facial contour features may include temple concavity, zygomatic arch protrusion, and mandibular angle width. Fig. 4 illustrates how a temple recess is determined: a series of marker points enclosing the facial contour can be placed on the face, and if the contour is recessed at the temple, a temple recess is indicated.
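A hedged sketch of the temple-recess check illustrated in Fig. 4: test on which side of the chord joining its neighbors the temple landmark falls. The landmark layout, index, and sign convention are assumptions for illustration only.

```python
import numpy as np

def is_temple_recessed(contour: np.ndarray, temple_idx: int, margin: float = 0.0) -> bool:
    """contour: (M,2) face-outline points, ordered along one side of the face;
    temple_idx: assumed index of the temple landmark."""
    p_prev, p, p_next = contour[temple_idx - 1], contour[temple_idx], contour[temple_idx + 1]
    a, b = p_next - p_prev, p - p_prev
    # 2D cross product: its sign tells whether the outline bends inward
    # (concave) or outward (convex) at the temple point.
    cross = a[0] * b[1] - a[1] * b[0]
    return cross < -margin   # negative = inward dip under the assumed ordering
```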
The cheekbones and zygomatic arches lie on the middle lateral side of the face; they are important supporting and bony landmarks that protect the cranium and facial muscles. The cheekbone is a single, diamond-shaped bone located below the outer part of the orbit and forms the bony prominence of the cheek. The zygomatic arch is formed by the union of the zygomatic process and the temporal process, sits on both sides of the craniofacial skeleton, and arches outward. Typically, overly prominent cheekbones affect the attractiveness of the side profile, while a protruding zygomatic arch affects the attractiveness of the frontal view.
A wide mandibular angle generally results from bony hypertrophy at the corners of the lower jaw or from overdeveloped masseter muscles bulging at both sides of the cheeks, which can detract somewhat from facial attractiveness.
The main function of the facial contour features is to provide a reference for whether the hairstyle should play a corrective role. For zygomatic arch protrusion, the matched hairstyle mainly uses hair coverage or a fluffy style for correction; for example, a bangs style combined with a perm, with the degree of curl also matched, can cover the protruding zygomatic arch. For a temple recess, the matched hairstyle mainly relies on coverage, for example a bangs style. A wide mandibular angle can likewise be softened with a suitable bangs style.
The available bangs styles can be further subdivided into straight bangs, side-swept bangs, air bangs, 3:7 side-parted bangs, figure-eight (curtain) bangs, and so on, giving the user more detailed choices and improving the user experience. For example, air bangs can effectively address a high hairline, recessed temples, and similar problems. 3:7 side-parted bangs have a natural inward-curving arc that flatters the face shape, in particular correcting recessed temples and protruding zygomatic arches, while looking more youthful and lively. The figure-eight curl of curtain bangs effectively reshapes the facial contour while highlighting delicate facial features, and corrects a narrow forehead, recessed temples, and protruding cheekbones.
The mapping relationship between facial contour features and hairstyle styles preset in this embodiment is shown in the following table, but is not limited thereto:
| No. | Facial contour feature | Hairstyle style |
| --- | --- | --- |
| 1 | Zygomatic arch protrusion | Corrected with bangs and with a perm |
| 2 | Temple recess | Corrected with bangs |
| 3 | Wide mandibular angle | Corrected with bangs |

Table 2
Taking short hair as an example, short-hair cuts combined with different styling can be subdivided into different style types. For example, a choppy short perm suits a melon-seed face: short hair that exposes the ears highlights the facial features and feels fresh and crisp overall, while the choppy bangs keep the styling from looking monotonous, and the hair at the crown can be made fluffy for more presence. As another example, a light, simple short cut softens facial angles and looks more lively; its 3:7 side-parted bangs, with their natural inward curve, flatter the face shape and in particular correct recessed temples and protruding zygomatic arches. As a further example, a center-parted long straight style preserves the natural feel of straight hair; the ends can be trimmed into layers, and paired with a melon-seed face it gives a goddess-like look, while the long center-parted bangs fall naturally along the face and effectively conceal recessed temples and protruding cheekbones.
In step S305, the facial appearance features of the target user 3D model are determined, and the hairstyle style is determined according to a preset mapping relationship between facial appearance features and hairstyle styles.
The facial appearance features may be eyebrow features, nose features, mouth features, or the like. Nose types may generally include narrow, medium, and wide noses; mouth types may generally include standard, thin, and full lips.
In this embodiment, the eyebrow feature is taken as the example (without limitation), and the corresponding hairstyle style can be matched according to the eyebrow feature. The eyebrow feature can be determined by detecting the key feature points of the user's eyebrow area, and may include at least one, or a combination, of the following: eyebrow shape, eyebrow color, and eyebrow shade.
Eyebrow shapes can generally be classified into 3 types, although the classification is not limited thereto: flat brows, arched brows, and standard brows. Standard brows suit most face shapes; the brow peak sits at about two-thirds of the way along the brow, the brow head lies on the vertical line through the inner corner of the eye, the underside of the brow carries a slight arc, and the color is lighter at both ends and deeper in the middle. Arched brows are characterized by a clean, sharp brow peak and relatively heavy color; the peak sits at about one-half of the brow, roughly 3 mm above the brow muscle, and the brow tail lies on the diagonal from the nose wing through the outer corner of the eye. Flat brows give a coordinated, gentle look; the peak sits at about one-half of the brow, on the vertical line of the nose bridge, the underside of the tail carries a slight arc, and the tail lies on a 45-degree line from the nose wing to the outer corner of the eye.
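The peak-position ratios above suggest a simple geometric classifier over the brow landmarks. The sketch below is a toy illustration; the thresholds and the landmark ordering are assumptions, not values from the patent.

```python
def classify_brow(xs: list[float], ys: list[float]) -> str:
    """xs, ys: brow landmark coordinates ordered from brow head (inner end)
    to brow tail. Image y grows downward, so the peak is the minimum y."""
    peak = min(range(len(ys)), key=lambda i: ys[i])     # index of the highest point
    ratio = (xs[peak] - xs[0]) / (xs[-1] - xs[0])       # peak position along the brow
    rise = ys[0] - ys[peak]                             # how sharply the brow lifts
    if ratio > 0.6:                                     # peak near two-thirds
        return "standard"
    # peak near one-half: distinguish arched from flat by how much it lifts
    return "arched" if rise > 0.15 * (xs[-1] - xs[0]) else "flat"
```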
The main function of the eyebrow feature is likewise to provide a reference for whether the hairstyle should play a corrective role.
The mapping relationship between eyebrow features and hairstyle styles preset in this embodiment is shown in the following table, but is not limited thereto:
| No. | Eyebrow feature | Hairstyle style |
| --- | --- | --- |
| 1 | Flat brows | Corrected with bangs |
| 2 | Arched brows | Corrected with bangs and with a perm |
| 3 | Standard brows | Corrected with bangs |

Table 3
In step S306, the head shape of the target user 3D model is determined, and the hairstyle style is matched in combination with the head shape.
The head features may also include a head shape; when the corresponding hairstyle is matched according to the head features, the head shape can be referred to at the same time. Head shapes may generally be classified as round, oval, square, and so on. For example, a square head shape is generally unsuited to short hair. Different hairstyle styles, such as a perm or a bangs style, can be chosen for different head shapes, with bangs options including straight bangs, side-swept bangs, air bangs, and so on.
In step S307, the neck features and/or shoulder features of the target user 3D model are determined, and the hairstyle is matched in combination with them.
Besides the head features, this embodiment can acquire neck features and/or shoulder features of the target user 3D model; while matching the corresponding hairstyle according to the head features, the neck features and/or shoulder features can be referred to at the same time. Neck features may generally include a long neck, a short neck, and so on; shoulder features may generally include wide shoulders, narrow shoulders, and so on. For example, with a long neck, the contour at the nape can appropriately be cut into an arc or a layered, reverse-sloping shape. For example, wide shoulders suit long hair, which covers part of the shoulders and makes them appear narrower.
In this way, considering the shoulder and neck features in addition to the head features reflects the user's figure more comprehensively, and combining these overall figure features makes the hairstyle recommendation more accurate.
The steps S304, S305, S306, and S307 do not have a necessary sequence relationship.
In step S308, the freckle features and/or acne-mark features of the target user 3D model are determined, and the hair color is determined according to a preset mapping relationship between freckle/acne-mark features and hair colors.
Freckles are yellow-brown punctate pigmentation spots on facial skin, generally round, oval, or irregular in shape. Acne marks are the marks left after facial acne and vary in depth and shade; they also come in several types, generally including dark acne marks, red acne marks, pitted acne scars, and the like.
This embodiment can determine whether freckles or acne marks are present by detecting the user's facial area. The freckle feature information may include at least one of the following: shade information, color information, shape information, size information, and density information.
The mapping relationship between freckle/acne-mark features and hair colors preset in this embodiment is shown in the following table, but is not limited thereto:
| No. | Freckle/acne-mark feature | Hair color |
| --- | --- | --- |
| 1 | With freckles | Chocolate; caramel |
| 2 | With acne marks | Black-brown; cool brown |

Table 4
The step S304, the step S305, the step S306, the step S307, and the step S308 do not necessarily have a sequential relationship.
In step S309, the pupil color of the target user 3D model is determined, and the hair color is determined according to a preset mapping relationship between pupil colors and hair colors.
In this embodiment, the pupil color of the user can be determined by acquiring key feature points, color values and brightness values of the eye area of the target user 3D model and performing color analysis according to a set algorithm, and different hair colors can be recommended for different pupil colors.
The key feature points may include, for example, feature points of the sclera (the white of the eye), the eyeball, and the iris. The iris adjusts the size of the pupil and contains pigment: eyes with the largest amount of pigment are black; eyes with a large amount are generally brown; and eyes with the least amount are typically blue. An image of the above feature points is acquired and can be represented by pixels, and various colors can be described by computing the RGB (Red, Green, Blue) average of the pixels. Specifically, the color of each region of the eye can be computed from the RGB averages of its pixels, and the pupil color can finally be determined from the computed colors of the eye regions.
It should be noted that the hair color is determined mainly by matching against the pupil color, and the match can be further refined by combining the freckle features and/or acne-mark features.
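One way to sketch this combination of steps S308 and S309: start from the pupil-color candidates and rank first those also suggested by the skin features (Table 4). The function names and abbreviated table contents are illustrative assumptions.

```python
# Abbreviated Table 4: skin feature -> preferred hair colors.
SKIN_TO_HAIR = {"freckles": {"chocolate", "caramel"},
                "acne marks": {"black brown", "cool brown"}}

def recommend_hair_color(pupil_candidates: list[str], skin_flags: list[str]) -> list[str]:
    """pupil_candidates: hair colors matched from the pupil color;
    skin_flags: detected skin features such as 'freckles'."""
    preferred = set().union(*(SKIN_TO_HAIR[f] for f in skin_flags)) if skin_flags else set()
    # Candidates endorsed by both cues rank first; pupil-only ones follow.
    return sorted(pupil_candidates, key=lambda c: c not in preferred)

print(recommend_hair_color(["caramel", "haze blue"], ["freckles"]))  # caramel first
```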
The set algorithm may be, for example, but is not limited to, a color feature extraction algorithm, such as the color histogram method or a wavelet-based segmented-image color feature extraction method. Taking the color histogram method as an example: the color histogram is a color feature widely used in many image retrieval systems. It describes the proportions of different colors in the whole image without caring about the spatial position of each color, so it cannot describe specific objects in the image. To compute a color histogram, the color space is first divided into several small color intervals, each becoming a bin of the histogram; this process is called color quantization. The histogram is then obtained by counting the number of pixels whose color falls within each bin.
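A minimal sketch of the quantize-and-count procedure just described, with an assumed bin count chosen purely for illustration:

```python
import numpy as np

def color_histogram(img: np.ndarray, bins_per_channel: int = 4) -> np.ndarray:
    """img: (H,W,3) uint8 RGB image -> normalized histogram of length bins**3."""
    # Color quantization: map each channel value 0..255 to a bin index 0..bins-1.
    q = img.astype(np.uint16) * bins_per_channel // 256
    # Combine the three per-channel indices into one bin index per pixel.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()        # proportion of the image in each color bin
```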
This embodiment can identify and analyze the pupil color from the color information near the eye key feature point left_eye_center.
Pupil colors in this embodiment may include a variety of colors: black, brown, amber, gray, blue, green, silver, hazel, heterochromia, central heterochromia, and the like.
The hair colors in this embodiment may include a variety of colors, such as black, black-brown, light flaxen, noble flaxen, misty flaxen gray, chocolate, caramel, honey brown, black tea, cool brown, warm brown, champagne gold, milk gray, and haze blue. Black tea is black with a hint of tea brown; caramel is the effect of brown combined with other tones; honey brown is a predominantly brown shade that shows a little dark brown in sunlight.
Generally, a warm pupil color is matched with correspondingly warm hair colors, and a cool pupil color with correspondingly cool hair colors.
The mapping relationship between pupil colors and hair colors preset in this embodiment is shown in the following table, but is not limited thereto:
Table 5
Step S309 of matching the hair color and the preceding steps of matching the hairstyle length or hairstyle style may be performed separately and independently, and it likewise has no necessary sequence relationship with step S308.
In step S310, the matched hairstyle and hair color are displayed on the target user 3D model.
This embodiment can remove the original hair from the target user 3D model and display the matched hairstyle and hair color for the user's reference. It should be noted that if two or more hairstyles and hair colors are matched, the hairstyle and hair color of the target user 3D model can be switched, either automatically or manually according to the user's selection. The switching function lets the target user screen and compare multiple hairstyles and hair colors, provides more suggestions to choose from, and lets the user settle on a final result according to their own preference, improving both experience and functionality.
In this embodiment, a number of popular and classic hairstyles can be provided for the user to switch among at will. Furthermore, corresponding styling methods and haircut key points can be provided for the different hairstyles. Functions such as saving video, forwarding, and saving images are also supported.
In this embodiment, the target user 3D model wearing the matched hairstyle and hair color can be zoomed in, zoomed out, rotated, and otherwise manipulated, so that the hairstyle and hair color can be displayed more vividly from different angles. It should be noted that the eyebrow feature may optionally be left out of consideration in this embodiment.
The scheme of this embodiment can integrate many characteristics of the user, recommending hairstyles and hair colors according to the pupil color and head features, where the head features may include the head shape, face shape, facial contour features (such as recessed temples, protruding zygomatic arches, and a wide mandibular angle), and facial appearance features (such as eyebrow, nose, and mouth features). A most-suitable hairstyle and hair color can thus be provided for each user, giving more accurate recommendations for reference and improving the user experience.
Corresponding to the foregoing embodiments of the method, the present application also provides a 3D model generation device and equipment, described in the corresponding embodiments below.
Fig. 5 is a schematic structural diagram of a 3D model generating device according to an embodiment of the present application.
Referring to fig. 5, the 3D model generating apparatus 50 includes: an image acquisition module 51, a model generation module 52, a hair color matching module 53, a hair style matching module 54, and a hair style display module 55.
The image acquisition module 51 is configured to acquire a face image of a target user. The face image acquired by the image acquisition module 51 may be a picture of the user's front face and/or side faces.
The model generation module 52 is configured to generate the target user 3D model according to the face image acquired by the image acquisition module 51. The model generation module 52 may generate the target user 3D model from the obtained pictures of the user's front face and/or side faces.
The hair color matching module 53 is configured to obtain the pupil color of the target user 3D model generated by the model generation module 52, and to match a corresponding hair color according to the pupil color. The hair color matching module 53 may determine the user's pupil color by acquiring key feature points, color values and brightness values of the eye area of the target user 3D model and performing color analysis according to a set algorithm. After the pupil color is determined, the corresponding hair color can be matched according to a mapping relationship between pupil colors and hair colors. The set algorithm may be, for example, but is not limited to, a color feature extraction algorithm, such as the color histogram method or a wavelet-based segmented-image color feature extraction method; the specific way of extracting color features can be implemented by various means and is not described in detail here.
The hairstyle matching module 54 is configured to obtain the head features of the target user 3D model generated by the model generation module 52, and to match a corresponding hairstyle according to the head features. The head features of the target user 3D model include a face shape and facial features, and the hairstyle matching module 54 may match a corresponding hairstyle length according to the face shape and a corresponding hairstyle style according to the facial features.
The hairstyle display module 55 is configured to display the matched hairstyle and hair color on the target user 3D model generated by the model generation module 52. When two or more hairstyles and hair colors are matched, the hairstyle display module 55 may switch the hairstyle and hair color of the target user 3D model, either automatically or manually according to the user's selection.
Fig. 6 is another schematic structural diagram of the 3D model generating apparatus shown in the embodiment of the present application.
Referring to fig. 6, the 3D model generating apparatus 50 includes: an image acquisition module 51, a model generation module 52, a hair color matching module 53, a hair style matching module 54, and a hair style display module 55.
The functions of the image acquisition module 51, the model generation module 52, the hair color matching module 53, the hair style matching module 54, and the hair style display module 55 can be seen in the description of fig. 5, and will not be repeated here.
The hairstyle matching module 54 may include: a face processing sub-module 541, and a facial feature processing sub-module 542.
The face shape processing submodule 541 is configured to match a corresponding hairstyle length according to the face shape among the head features. This embodiment may preset several face shape types; for example, the face shape can be divided into 7 types: melon-seed (oval) face, pear-shaped face, square face, doll face, elongated face, standard face, and mixed face. Hairstyle lengths may also be preset into several types; for example, hairstyles may be divided into short, medium, and long hair, and further subdivisions can be made under these 3 main types.
The facial feature processing sub-module 542 is configured to match a corresponding hairstyle style according to the facial features among the head features. The facial features may include facial contour features and facial appearance features; the facial contour features may include at least one of temple recess, zygomatic arch protrusion, and mandibular angle width, and the facial appearance features may include eyebrow features, nose features, mouth features, and the like.
The hairstyle matching module 54 may further include: a header processing sub-module 543.
The head shape processing sub-module 543 is configured to match a corresponding hairstyle style according to the head shape among the head features. Head shapes may generally be classified as round, oval, square, and so on; when the corresponding hairstyle style is matched according to the head features, the head shape can be referred to at the same time. For example, if the head shape is square, which is generally unsuited to short hair, long hair can be matched. Different hairstyle styles, such as a perm or a bangs style, can be chosen for different head shapes, with bangs options including straight bangs, side-swept bangs, air bangs, and so on.
It should be noted that the hair color matching module 53 may also determine the hair color of the hairstyle according to a preset mapping between freckle features, acne mark features and hair colors. Freckles are yellow-brown punctate pigmentation spots on the facial skin, generally round, oval or irregular in shape. Acne marks are the marks left on the face after acne, and they vary in depth of color; they can be classified into several types, generally including black acne marks, red acne marks, pitted acne marks and the like. In the case of freckles, the hair color may generally be chosen from chocolate, caramel coffee and the like, but is not limited thereto; in the case of acne marks, the hair color may be chosen from black-brown, cool brown and the like, but is not limited thereto. A minimal sketch of such a mapping follows. In addition, when the hairstyle matching module 54 matches the corresponding hairstyle according to the head features, it may also take neck features and/or shoulder features into account.
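A minimal sketch of the preset mapping, assuming dictionary keys and a simple preference rule; the color choices come from the examples above, everything else is illustrative.

```python
# Hypothetical blemish-to-hair-color mapping, using the colors named above.
BLEMISH_TO_COLORS = {
    "freckle":   ["chocolate", "caramel_coffee"],
    "acne_mark": ["black_brown", "cool_brown"],
}

def refine_hair_colors(pupil_color_matches, blemishes):
    # Keep the pupil-color matches that the blemish mapping also prefers;
    # if none overlap, fall back to the original pupil-color matches.
    preferred = []
    for blemish in blemishes:
        preferred.extend(BLEMISH_TO_COLORS.get(blemish, []))
    kept = [c for c in pupil_color_matches if c in preferred]
    return kept or pupil_color_matches

print(refine_hair_colors(["caramel_coffee", "wine_red"], ["freckle"]))
# -> ['caramel_coffee']
```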
In this embodiment, the corresponding hair color may be matched according to the pupil color, the corresponding hairstyle length may be matched according to the user's face shape, and the corresponding hairstyle style may be matched according to the facial features. Further, the corresponding hairstyle length can be matched according to the user's face shape while the corresponding hairstyle style is matched according to the head form, the facial contour features, the facial appearance features, the neck features, the shoulder features and the like. More characteristics of the user can thus be considered comprehensively, so that the recommended hairstyle and hair color are more accurate. A sketch of the pupil-color analysis that drives the hair color matching follows.
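As described above, the pupil color is determined by computing RGB average values of the eye white, eyeball and iris from color and brightness values at the key feature points. The sketch below is one assumed instance of the set algorithm: it uses the brightness values as averaging weights and normalises the iris mean against the eye-white mean, neither of which the embodiment pins down.

```python
import numpy as np

def pupil_color(eye_regions: dict) -> np.ndarray:
    # eye_regions maps "white", "eyeball" and "iris" to (rgb, brightness)
    # pairs: rgb is an (N, 3) array of samples at the key feature points,
    # brightness a length-N array of the matching brightness values.
    means = {}
    for name, (rgb, brightness) in eye_regions.items():
        # Brightness-weighted RGB average for each key region.
        means[name] = np.average(np.asarray(rgb, float), axis=0,
                                 weights=np.asarray(brightness, float))
    # Normalise the iris mean against the eye-white mean so lighting largely
    # cancels out, then rescale to the 0-255 range.
    return np.clip(means["iris"] / np.maximum(means["white"], 1e-6) * 255,
                   0, 255)

regions = {
    "white":   ([[230, 228, 225], [240, 238, 236]], [0.9, 1.0]),
    "eyeball": ([[60, 50, 45], [70, 55, 50]], [0.8, 0.9]),
    "iris":    ([[90, 60, 40], [100, 70, 45]], [0.85, 0.95]),
}
print(pupil_color(regions))  # roughly a warm brown in RGB
```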
The specific manner in which the respective modules perform the operations in the apparatus of the above embodiments has been described in detail in the embodiments related to the method, and will not be described in detail herein.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may be, but is not limited to, a mobile terminal device.
Referring to fig. 7, the electronic device 1000 includes a memory 1010 and a processor 1020.
The processor 1020 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Memory 1010 may include various types of storage units, such as system memory, read-only memory (ROM) and persistent storage. The ROM may store static data or instructions required by the processor 1020 or by other modules of the computer. The persistent storage may be a readable and writable storage device, that is, a non-volatile memory device that does not lose the stored instructions and data even after the computer is powered down. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the persistent storage; in other embodiments, the persistent storage may be a removable storage device (e.g., a diskette or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data required by some or all of the processors at runtime. Furthermore, memory 1010 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory); magnetic disks and/or optical disks may also be employed. In some implementations, memory 1010 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, a super-density disc, a flash memory card (e.g., an SD card, a mini SD card, a micro SD card, etc.), a magnetic floppy disk and the like. Computer-readable storage media do not contain carrier waves or transient electronic signals transmitted wirelessly or by wired transmission.
The memory 1010 has stored thereon executable code that, when processed by the processor 1020, can cause the processor 1020 to perform some or all of the methods described above.
The aspects of the present application have been described in detail above with reference to the accompanying drawings. In the foregoing embodiments, the description of each embodiment has its own emphasis; for portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. Those skilled in the art will also appreciate that the acts and modules referred to in the specification are not necessarily required by the present application. In addition, it can be understood that the steps of the methods of the embodiments of the present application may be reordered, combined and deleted according to actual needs, and the modules of the apparatuses of the embodiments of the present application may be combined, divided and deleted according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing part or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform some or all of the steps of the above-described methods according to the present application.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the application herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The embodiments of the present application have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application or improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (11)

1. A method for generating a 3D model, comprising:
acquiring a face image of a target user;
generating a target user 3D model according to the face image;
obtaining a pupil color of the target user 3D model, and matching a corresponding hair color according to the pupil color, comprising: obtaining freckle features and/or acne mark features of a face of the target user 3D model; and when matching the corresponding hair color according to the pupil color, matching the hair color in combination with the freckle features and/or the acne mark features;
acquiring head features of the target user 3D model, and matching a corresponding hairstyle according to the head features;
displaying the matched hairstyle and hair color on the target user 3D model;
wherein the obtaining the pupil color of the target user 3D model comprises: obtaining key feature points, color values and brightness values of an eye area of the target user 3D model, and performing color analysis according to a set algorithm to determine the pupil color of the target user;
wherein the key feature points comprise an eye white, an eyeball and an iris, and the obtaining key feature points, color values and brightness values of the eye area of the target user 3D model and performing color analysis according to the set algorithm to determine the pupil color of the target user comprises:
acquiring color values and brightness values corresponding to the eye white, the eyeball and the iris;
calculating RGB average values of the eye white, the eyeball and the iris respectively by using the color values and the brightness values;
and performing color analysis on the RGB average values of the eye white, the eyeball and the iris according to the set algorithm to determine the pupil color of the target user.
2. The method of claim 1, wherein the head features include a face shape and facial features;
the matching a corresponding hairstyle according to the head features comprises:
matching a corresponding hairstyle length according to the face shape, and matching a corresponding hairstyle style according to the facial features.
3. The method according to claim 2, characterized in that:
the facial features include: facial contour features and facial appearance features;
the facial contour features include at least one of a temple recess, a zygomatic arch prominence and a mandibular angle width;
the facial appearance features include at least one of an eyebrow feature, a nose feature and a mouth feature.
4. The method according to claim 2, characterized in that:
the head features further include a head form;
the matching a corresponding hairstyle according to the head features comprises:
matching a corresponding hairstyle length according to the face shape, and matching a corresponding hairstyle style according to the facial features and the head form.
5. The method according to claim 1, characterized in that:
the method further comprises: acquiring neck features and/or shoulder features of the target user 3D model;
and when matching the corresponding hairstyle according to the head features, matching the hairstyle in combination with the neck features and/or the shoulder features.
6. The method according to any one of claims 1 to 5, wherein said displaying the matched hairstyle and hair color on the target user 3D model comprises:
when two or more hairstyles and hair colors are matched, switching the hairstyle and hair color of the target user 3D model, the switching comprising automatic switching or manual switching according to a selection of the user.
7. A 3D model generation apparatus, comprising:
the image acquisition module is used for acquiring a face image of a target user;
the model generation module is used for generating a target user 3D model according to the face image acquired by the image acquisition module;
the hair color matching module is used for obtaining a pupil color of the target user 3D model generated by the model generation module, and matching a corresponding hair color according to the pupil color, comprising: obtaining freckle features and/or acne mark features of a face of the target user 3D model; and when matching the corresponding hair color according to the pupil color, matching the hair color in combination with the freckle features and/or the acne mark features;
the hairstyle matching module is used for acquiring head features of the target user 3D model generated by the model generation module, and matching a corresponding hairstyle according to the head features;
the hairstyle display module is used for displaying the matched hairstyle and hair color on the target user 3D model generated by the model generation module;
wherein the hair color matching module is used for: obtaining key feature points, color values and brightness values of an eye area of the target user 3D model, and performing color analysis according to a set algorithm to determine the pupil color of the target user;
wherein the key feature points comprise an eye white, an eyeball and an iris, and the hair color matching module is further used for:
acquiring color values and brightness values corresponding to the eye white, the eyeball and the iris;
calculating RGB average values of the eye white, the eyeball and the iris respectively by using the color values and the brightness values;
and performing color analysis on the RGB average values of the eye white, the eyeball and the iris according to the set algorithm to determine the pupil color of the target user.
8. The apparatus of claim 7, wherein the hairstyle matching module comprises:
the face processing submodule is used for matching a corresponding hairstyle length according to the face shape in the head features;
and the facial feature processing sub-module is used for matching a corresponding hairstyle style according to the facial features in the head features.
9. The apparatus of claim 8, wherein the hairstyle matching module further comprises:
and the head processing sub-module is used for matching a corresponding hairstyle style according to the head form in the head features.
10. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any of claims 1-6.
11. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any of claims 1-6.
CN202010449523.XA 2020-05-25 2020-05-25 3D model generation method, device and equipment Active CN113724366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010449523.XA CN113724366B (en) 2020-05-25 2020-05-25 3D model generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN113724366A CN113724366A (en) 2021-11-30
CN113724366B (en) 2024-02-27

Family

ID=78671601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010449523.XA Active CN113724366B (en) 2020-05-25 2020-05-25 3D model generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN113724366B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020014844A (en) * 2000-07-18 2002-02-27 최창석 Three dimensional face modeling method
JP2011101823A (en) * 2011-02-14 2011-05-26 Kao Corp Hairstyle advice method
TW201301147A (en) * 2011-06-16 2013-01-01 Jian-Wen Peng An intelligent system and algorithms of facial analysis and hairstyle fitting
CN105069180A (en) * 2015-06-19 2015-11-18 上海卓易科技股份有限公司 Hair style design method and system
CN107545051A (en) * 2017-08-23 2018-01-05 武汉理工大学 Hair style design system and method based on image procossing
CN107808136A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, readable storage medium storing program for executing and computer equipment
CN108305146A (en) * 2018-01-30 2018-07-20 杨太立 A kind of hair style recommendation method and system based on image recognition
KR101959871B1 (en) * 2017-12-20 2019-03-19 이해분 Apparatus and method for providing hair style recommendation information
KR20190116052A (en) * 2018-04-03 2019-10-14 고려대학교 산학협력단 Deep learning based personal color diagnosis and virtual make-up method and apparatus
CN110580484A (en) * 2018-06-07 2019-12-17 郭小平 Intelligent hairstyle matching system
CN111161035A (en) * 2019-12-31 2020-05-15 北京三快在线科技有限公司 Dish recommendation method and device, server, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070058858A1 (en) * 2005-09-09 2007-03-15 Michael Harville Method and system for recommending a product based upon skin color estimated from an image
EP1862089A1 (en) * 2006-06-01 2007-12-05 Carlo Guidi Method for selecting a harmonious hair colour suitable for an individual


Also Published As

Publication number Publication date
CN113724366A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
CN112784773B (en) Image processing method and device, storage medium and terminal
US20200042769A1 (en) Face image processing methods and apparatuses, and electronic devices
CN110390632B (en) Image processing method and device based on dressing template, storage medium and terminal
CN101055647B (en) Method and device for processing image
CN108171789B (en) Virtual image generation method and system
CN109376582A (en) A kind of interactive human face cartoon method based on generation confrontation network
CN109690617A (en) System and method for digital vanity mirror
JP2019527410A (en) Method for hiding objects in images or videos and related augmented reality methods
EP0897680A2 (en) Method for selecting suitable hairstyle and image-map for hairstyle
CN107820591A (en) Control method, controller, Intelligent mirror and computer-readable recording medium
JP2005038375A (en) Eye configuration classifying method and configuration classifying map and eye make-up method
WO2020177434A1 (en) Image processing method and apparatus, image device, and storage medium
US10512321B2 (en) Methods, systems and instruments for creating partial model of a head for use in hair transplantation
CN107784630B (en) Method, device and terminal for turning attributes of face image
CN105913416A (en) Method for automatically segmenting three-dimensional human face model area
CA2667526A1 (en) Method and device for the virtual simulation of a sequence of video images
CN105405157A (en) Portrait Generating Device And Portrait Generating Method
CN110866139A (en) Cosmetic treatment method, device and equipment
WO2021082787A1 (en) Virtual operation object generation method and device, storage medium and electronic apparatus
KR101987189B1 (en) Makeup recommendation kiosk
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
CN113724366B (en) 3D model generation method, device and equipment
CN108833772A (en) Taking pictures based on depth camera guides system and method
CN112508777A (en) Beautifying method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant