CN113724366A - 3D model generation method, device and equipment

Info

Publication number: CN113724366A (application CN202010449523.XA; granted publication CN113724366B)
Authority: CN (China)
Prior art keywords: color, target user, model, hair, matching
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 傅艳, 娄心怡, 朱嘉, 宋莹莹
Assignee (original and current): Soyoung Technology Beijing Co Ltd
Events: application filed by Soyoung Technology Beijing Co Ltd with priority to CN202010449523.XA; publication of CN113724366A; application granted; publication of CN113724366B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The present application relates to a 3D model generation method, device and equipment. The 3D model generation method comprises the following steps: acquiring a face image of a target user; generating a 3D model of the target user from the face image; acquiring the pupil color of the target user 3D model and matching a corresponding hair color according to the pupil color; acquiring head features of the target user 3D model and matching a corresponding hairstyle according to the head features; and displaying the matched hairstyle and hair color on the target user 3D model. The scheme provided by the application can accurately recommend a suitable hairstyle and hair color to the user.

Description

3D model generation method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for generating a 3D model.
Background
With the continuous improvement of living standards, people increasingly pursue spiritual enrichment alongside material comfort, and therefore pay more and more attention to clothing and makeup. In current fashion trends, a suitable hairstyle and hair color can greatly improve a person's image and self-confidence in front of others.
In the related art, 2D hairstyle pictures are generally provided for the user to choose from; the user selects a hairstyle and hair color in advance from the 2D hairstyle pictures provided by a barber shop. However, the user's facial features and face shape may differ from those of the model in the picture, the model's hairstyle is not necessarily suitable for the user, and the result can only be seen after the hair is cut. A 2D hairstyle picture thus cannot reflect how the user would actually look with the recommended hairstyle and hair color.
Therefore, the related-art method of using 2D hairstyle pictures cannot accurately recommend a suitable hairstyle and hair color to the user.
Disclosure of Invention
To overcome the problems in the related art, the present application provides a 3D model generation method, device and equipment that can accurately recommend a suitable hairstyle and hair color to the user.
A first aspect of the present application provides a 3D model generation method, including:
acquiring a face image of a target user;
generating a target user 3D model according to the face image;
acquiring the pupil color of the target user 3D model, and matching a corresponding hair color according to the pupil color;
acquiring head features of the target user 3D model, and matching a corresponding hairstyle according to the head features;
displaying the matched hairstyle and hair color on the target user 3D model.
In one embodiment, the head features include a face shape and facial features;
the step of matching a corresponding hairstyle according to the head features comprises:
and matching the corresponding hairstyle length according to the face shape, and matching the corresponding hairstyle pattern according to the facial features.
In one embodiment, the facial features include: facial contour features and facial appearance features;
the facial contour features comprise at least one of a temple depression, a zygomatic arch protrusion and a mandibular angle width;
the facial appearance feature includes at least one of an eyebrow feature, a nose feature, and a mouth feature.
In one embodiment, the head features further comprise a head shape;
the step of matching a corresponding hairstyle according to the head features comprises:
and matching the corresponding hairstyle length according to the face shape, and matching the corresponding hairstyle style according to the facial features and the head shape.
In one embodiment, the method further comprises: acquiring neck features and/or shoulder features of the target user 3D model;
when the corresponding hairstyle is matched according to the head features, the neck features and/or the shoulder features are combined at the same time in matching the hairstyle.
In one embodiment, the method further comprises: acquiring freckle features and/or acne mark features of the face of the target user 3D model;
when the corresponding hair color is matched according to the pupil color, the freckle features and/or the acne mark features are combined at the same time in matching the hair color.
In one embodiment, obtaining the pupil color of the 3D model of the target user includes:
the pupil color of the target user is determined by obtaining key feature points, color values and brightness values of the eye area of the 3D model of the target user and performing color analysis according to a set algorithm.
In one embodiment, displaying the matched hairstyle and hair color on the target user 3D model comprises:
when two or more hairstyles and hair colors are matched, switching the hairstyle and hair color of the target user 3D model, where the switching comprises automatic switching or manual switching according to the user's selection.
A second aspect of the present application provides a 3D model generation apparatus, including:
the image acquisition module is used for acquiring a face image of a target user;
the model generation module is used for generating a 3D model of the target user according to the face image acquired by the image acquisition module;
the hair color matching module is used for acquiring the pupil color of the target user 3D model generated by the model generation module and matching a corresponding hair color according to the pupil color;
the hairstyle matching module is used for acquiring the head features of the target user 3D model generated by the model generation module and matching a corresponding hairstyle according to the head features;
and the hair style display module is used for displaying the matched hair style and hair color on the target user 3D model generated by the model generation module.
In one embodiment, the hairstyle matching module comprises:
the face shape processing submodule is used for matching a corresponding hairstyle length according to the face shape in the head features;
and the facial feature processing submodule is used for matching a corresponding hairstyle pattern according to the facial features in the head features.
In one embodiment, the hairstyle matching module further comprises:
the head shape processing submodule, used for matching a corresponding hairstyle pattern according to the head shape in the head features.
A third aspect of the present application provides an electronic device comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method described above.
A fourth aspect of the present application provides a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the above-mentioned method.
The technical scheme provided by the application can comprise the following beneficial effects:
according to the scheme, after the face image of the target user is obtained, the 3D model of the target user is generated according to the face image, the pupil color of the 3D model of the target user is obtained, the head characteristic of the 3D model of the target user is obtained, the corresponding hair color is matched according to the pupil color, the corresponding hair style is matched according to the head characteristic, and finally the matched hair style and hair color are displayed on the 3D model of the target user. Because the target user 3D model is generated according to the face image of the user, the three-dimensional effect of the target user 3D model is more fit with the actual situation of the target user, and the actual appearance characteristics of the target user can be accurately reflected, the target user 3D model can be used for selecting the hair-exchanging experience fitting the actual effect for the target user. Furthermore, the corresponding hair color is matched according to the pupil color of the 3D model of the target user and the corresponding hair style is matched according to the head characteristic of the 3D model of the target user, so that the recommendation result can be matched with the actual appearance characteristic of the user, the recommendation is more accurate and vivid, the reference value is provided for the user, and the use experience of the user can be improved.
According to the scheme of the present application, the head features may comprise a face shape and facial features, with a corresponding hairstyle length matched according to the face shape and a corresponding hairstyle pattern according to the facial features; the facial contour features comprise at least one of temple depression, zygomatic arch protrusion and mandibular angle width, and the facial appearance features comprise at least one of an eyebrow feature, a nose feature and a mouth feature. In this way, multiple fine-grained characteristics of the user can be considered comprehensively, making the recommended hairstyle and hair color more accurate.
According to the scheme of the present application, the head shape, neck features, shoulder features and the like can further be referred to when matching the corresponding hairstyle, so that the hairstyle and hair color recommendation considers the user more comprehensively and is more accurate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flow chart of a 3D model generation method according to an embodiment of the present disclosure;
FIG. 2 is another schematic flow chart diagram illustrating a 3D model generation method according to an embodiment of the present disclosure;
FIG. 3 is another schematic flow chart diagram illustrating a 3D model generation method according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating how temple depression is determined in a 3D model generation method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a 3D model generation apparatus according to an embodiment of the present application;
fig. 6 is another schematic structural diagram of a 3D model generation apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
In an age that increasingly values personal appearance, people pay attention to clothing and makeup in the display and pursuit of beauty, and a suitable hairstyle and hair color help improve one's personal image. In the related art, a hairstyle is recommended to the user only by generating 2D image information, so a suitable hairstyle cannot be accurately recommended, and hair color selection cannot be implemented at all. In view of the above problems, embodiments of the present application provide a 3D model generation method that can accurately recommend a suitable hairstyle and hair color to the user. The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a 3D model generation method according to an embodiment of the present application.
Referring to fig. 1, the method flow includes:
in step S101, a face image of a target user is acquired.
The acquired face image of the target user may be pictures of the user's front face and/or side faces. Specifically, the user may be prompted to move the head into a shooting area, and a camera of the terminal captures a picture of the user's front face; the user then turns the head to the left and right as prompted, and the camera captures pictures of the side faces. The face image of the target user is thereby obtained, so that the target user 3D model can subsequently be generated from the captured picture information.
It should be further noted that one front-face picture and two side-face pictures uploaded by the user may also be obtained directly as the face image of the target user.
In step S102, a 3D model of the target user is generated from the face image.
Generating the 3D model of the target user from the face image may be generating the 3D model of the target user from a picture of the user's front face and/or side faces.
In step S103, the pupil color of the target user 3D model is acquired, and a corresponding hair color is matched according to the pupil color.
To obtain the pupil color of the target user 3D model, the pupil color may be determined by obtaining key feature points, color values and brightness values of the eye region of the model and performing color analysis according to a set algorithm. After the user's pupil color is determined, the corresponding hair color is matched according to a mapping relation between pupil color and hair color.
The key feature points may include, for example, the white of the eye, the eyeball and the iris. The iris adjusts the size of the pupil and contains pigment: the most pigment generally gives black eyes, a larger amount gives brown eyes, and the least gives blue eyes. Images of these feature points are acquired and expressed in pixels, and colors can be described by computing the average RGB (Red, Green, Blue) value of the pixels. Specifically, the color of each region of the eye can be calculated from the RGB averages of its pixels, and the pupil color finally determined from the calculated region colors. The set algorithm may be, for example, a color feature extraction algorithm, but is not limited thereto; it may be, for example, a color histogram method or a wavelet-based block image color feature extraction method, and the specific operations of color feature extraction can be implemented by various means, not detailed here.
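As a concrete illustration of this step (the patent itself gives no code), the following minimal Python sketch averages the RGB values over an iris mask and applies rough pigment-based rules; the thresholds and the reduced label set are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mean_rgb(image: np.ndarray, iris_mask: np.ndarray) -> np.ndarray:
    """Average RGB over the masked iris pixels of an H x W x 3 image."""
    return image[iris_mask].reshape(-1, 3).mean(axis=0)

def classify_pupil_color(rgb: np.ndarray) -> str:
    """Rough pigment-based classification; thresholds are illustrative only."""
    r, g, b = rgb
    brightness = rgb.mean()
    if brightness < 60:        # heavy pigment: generally black eyes
        return "black"
    if b > r and b > g:        # little pigment: generally blue eyes
        return "blue"
    if r >= g >= b:            # moderate pigment: generally brown eyes
        return "brown"
    return "other"
```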
In step S104, the head feature of the 3D model of the target user is obtained, and the corresponding hair style is matched according to the head feature.
Specifically, the head features of the target user 3D model include a face shape and facial features; a corresponding hairstyle length can be matched according to the face shape, and a corresponding hairstyle pattern according to the facial features. The facial features may include facial contour features such as temple depression, zygomatic arch protrusion and mandibular angle width, which reflect the outline frame; the corresponding hairstyle pattern is matched according to a mapping relation between facial contour features and hairstyle patterns.
It should be noted that step S103 and step S104 do not necessarily have a sequential relationship.
In step S105, the matched hairstyle and hair color are displayed on the target user 3D model.
When two or more hairstyles and hair colors are matched, the hairstyle and hair color of the target user 3D model can be switched, either automatically or manually according to the user's selection. This switching lets the target user screen and compare multiple hairstyles and hair colors, offers more suggestions to choose from, and lets the user settle on a final result according to personal preference, improving both the experience and the functionality.
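As a small illustration of this switching (not the patent's implementation), the sketch below cycles automatically through multiple matched results or honors a manual user selection; the style and color values are placeholders.

```python
from itertools import cycle

# Placeholder results standing in for the outputs of steps S103 and S104.
matched_styles = ["playful bob", "refreshing corn perm", "elegant lob"]
matched_colors = ["chocolate", "cold brown"]

style_iter, color_iter = cycle(matched_styles), cycle(matched_colors)

def next_look(auto: bool, manual_choice=None):
    """Return the next (hairstyle, hair color) pair to render on the 3D model."""
    if auto or manual_choice is None:
        return next(style_iter), next(color_iter)  # automatic switching
    return manual_choice                           # manual switching by the user
```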
In this embodiment, after the face image of the target user is acquired, the target user 3D model is generated from the face image; the pupil color and the head features of the model are acquired; a corresponding hair color is matched according to the pupil color and a corresponding hairstyle according to the head features; and finally the matched hairstyle and hair color are displayed on the target user 3D model. Because the target user 3D model is generated from the user's face image, its three-dimensional appearance fits the target user's actual situation and accurately reflects the user's real appearance features, so the model gives the target user a hair-changing experience that matches the actual effect. Furthermore, since the hair color is matched to the pupil color of the 3D model and the hairstyle to its head features, the recommendation matches the user's actual appearance features, making it more accurate and vivid, providing reference value for the user, and improving the user experience.
Fig. 2 is another schematic flow chart of a 3D model generation method according to an embodiment of the present disclosure. In the flow of Fig. 2, the corresponding hairstyle is matched according to the face shape and the facial features, where the facial features are illustrated by facial appearance features.
Referring to fig. 2, the method flow includes:
in step S201, a face image of a target user is acquired.
The acquired face image of the target user may be pictures of the user's front face and/or side faces. Specifically, the user may be prompted to move the head into a shooting area, and a camera of the terminal captures a picture of the user's front face; the user then turns the head to the left and right as prompted, and the camera captures pictures of the side faces. The face image of the target user is thereby obtained, so that the target user 3D model can subsequently be generated from the captured picture information.
It should be further noted that one front-face picture and two side-face pictures uploaded by the user may also be obtained directly as the face image of the target user.
In step S202, a 3D model of the target user is generated from the face image.
Generating the 3D model of the target user from the face image may be generating the 3D model of the target user from a picture of the user's front face and/or side faces.
In step S203, the pupil color of the target user 3D model is acquired, and a corresponding hair color is matched according to the pupil color.
To obtain the pupil color of the target user 3D model, the pupil color may be determined by obtaining key feature points, color values and brightness values of the eye region of the model and performing color analysis according to a set algorithm. After the user's pupil color is determined, the corresponding hair color is matched according to a mapping relation between pupil color and hair color.
The key feature points may include, for example, the white of the eye, the eyeball and the iris. The iris adjusts the size of the pupil and contains pigment: the most pigment generally gives black eyes, a larger amount gives brown eyes, and the least gives blue eyes. Images of these feature points are acquired and expressed in pixels, and colors can be described by computing the average RGB (Red, Green, Blue) value of the pixels. Specifically, the color of each region of the eye can be calculated from the RGB averages of its pixels, and the pupil color finally determined from the calculated region colors.
The set algorithm may be, for example, a color feature extraction algorithm, but is not limited thereto; it may be, for example, a color histogram method or a wavelet-based block image color feature extraction method. The specific operations of color feature extraction can be implemented by various means and are not detailed here.
In step S204, the face shape and the face features of the 3D model of the target user are acquired, and the corresponding hairstyle is matched according to the face shape and the face features.
The corresponding hairstyle is matched according to the face shape and the facial features: a corresponding hairstyle length is matched according to the face shape, and a corresponding hairstyle pattern according to the facial features. The facial features may include facial appearance features, such as eyebrow, nose or mouth features. Nose types generally include narrow nose, medium nose, broad nose and the like; mouth types generally include standard lips, thin lips, thick lips and the like. Taking the eyebrow feature as an example, the corresponding hairstyle length can be matched according to the face shape and the corresponding hairstyle pattern according to the eyebrow feature. The eyebrow feature may comprise at least one or a combination of eyebrow shape, eyebrow color and eyebrow depth; according to the specific eyebrow feature, the corresponding hairstyle pattern is matched using a mapping relation between eyebrow shape and hairstyle pattern.
It should be noted that step S203 and step S204 do not necessarily have a sequential relationship.
In step S205, the matched hairstyle and hair color are displayed on the target user 3D model.
When two or more hairstyles and hair colors are matched, the hairstyle and hair color of the target user 3D model can be switched, either automatically or manually according to the user's selection. This switching lets the target user screen and compare multiple hairstyles and hair colors, offers more suggestions to choose from, and lets the user settle on a final result according to personal preference, improving both the experience and the functionality.
In this embodiment, a corresponding hairstyle length can be matched according to the user's face shape, and a corresponding hairstyle pattern according to a facial appearance feature such as the eyebrow feature. In this way, more of the user's characteristics are considered comprehensively, making the recommended hairstyle and hair color more accurate.
Fig. 3 is another schematic flow chart of a 3D model generation method according to an embodiment of the present disclosure. Fig. 3 describes the solution of the present application in more detail than Figs. 1 and 2 and takes more features into account.
In this embodiment, a target user 3D model may be generated from the user's face image, and a hairstyle and hair color recommended according to the pupil color, head features, neck features, shoulder features and the like. The head features may include a head shape, a face shape and facial features; the facial features may include facial contour features and facial appearance features; the facial contour features may include at least one of temple depression, zygomatic arch protrusion and mandibular angle width; and the facial appearance features may include at least one of an eyebrow feature, a nose feature and a mouth feature. A hairstyle can be characterized by aspects such as hairstyle length and hairstyle pattern.
Referring to fig. 3, the method flow includes:
in step S301, a face image of a target user is acquired.
The acquired face image of the target user may be pictures of the user's front face and/or side faces. Specifically, the user is prompted to move the head into a shooting area, and a camera of the terminal captures a picture of the user's front face; the user then turns the head to the left and right as prompted, and the camera captures pictures of the side faces. The face image of the target user is thereby obtained, so that the target user 3D model can subsequently be generated from the captured picture information.
More specifically, this embodiment may acquire a face picture taken by the user through a camera of the terminal device, or a face picture uploaded directly by the user. The face picture generally includes the complete facial features of the front face, the side cheek regions, the ears and so on, and may comprise front-face and/or side-face pictures. The terminal device can be a mobile phone, a tablet and the like. Shooting with the camera may mean first recording a video and then capturing pictures from it, or taking pictures directly. The camera can be a 3D camera or a 2D camera: a 3D camera collects the three-dimensional space coordinates of each point in the field of view, and three-dimensional imaging can be restored through an algorithm, whereas a 2D camera acquires only the two-dimensional (x, y) coordinates of each point in the image.
It should be further noted that one front-face picture and two side-face pictures uploaded by the user may also be obtained directly as the face image of the target user.
In step S302, a 3D model of the target user is generated from the face image.
Generating the target user 3D model from the face image may mean generating it from the pictures of the user's front face and/or side faces.
After the user's face image is obtained, a 3D face recognition algorithm can generate the target user 3D model from the key feature points of the face. The algorithm may be, for example, one based on image features, on model variable parameters, or on deep learning, as commonly used in the related art; the disclosure is not limited in this respect. The key feature points of the face may include, but are not limited to, contour points of the eyes, pupils, nose, mouth, ears, eyebrows and other facial parts. Taking the model-variable-parameter approach as an example, it combines 3D deformation of a generic face model with iterative matrix minimization based on distance mapping to recover the head pose and the 3D face: the pose parameters are continually updated as the association relation of the model deformation changes, and the process repeats until the minimized measure meets the requirement. Its biggest difference from image-feature-based algorithms is that the latter must search for the feature point coordinates anew each time the face pose changes, whereas the model-based method only needs to adjust them.
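The patent names the model-variable-parameter approach without giving details; the following heavily simplified numpy sketch conveys the alternate-pose-and-deform idea under an assumed scaled-orthographic camera. Every function, parameter and threshold here is an assumption for illustration, not the patented algorithm.

```python
import numpy as np

def estimate_pose(X3d, x2d):
    """Scaled-orthographic pose (s, R2, t) aligning 3D landmarks to 2D ones."""
    Xc, xc = X3d - X3d.mean(0), x2d - x2d.mean(0)
    P, *_ = np.linalg.lstsq(Xc, xc, rcond=None)   # least-squares 3x2 projection
    P = P.T                                       # 2 x 3
    U, S, Vt = np.linalg.svd(P)
    R2 = U @ np.eye(2, 3) @ Vt                    # nearest rotation rows
    s = S.mean()                                  # isotropic scale
    t = x2d.mean(0) - s * (X3d.mean(0) @ R2.T)
    return s, R2, t

def fit(X3d, x2d, iters=10, step=0.5):
    """Alternate pose estimation and landmark deformation (very simplified)."""
    X = X3d.copy()
    for _ in range(iters):
        s, R2, t = estimate_pose(X, x2d)
        proj = s * X @ R2.T + t                   # project current 3D landmarks
        residual = x2d - proj                     # 2D landmark error
        # Back-project the residual to nudge the 3D points (no depth update).
        X += step * (residual / max(s, 1e-8)) @ R2
    return X
```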
In step S303, the face shape of the target user 3D model is determined, and the hairstyle length matching the user's face shape is determined according to a preset mapping relation between face shape and hairstyle length.
This embodiment can determine the face shape from data such as the face length-to-width ratio, the zygomatic-arch-to-mandible ratio, and the variance of the facial contour points.
The face shape may be preset to several types; for example, it may be divided into 7 types: melon seed face, pear-shaped face, square face, baby face, long face, standard face and mixed face.
The hairstyle length may likewise be divided into several types in advance, for example short hair, medium hair and long hair; finer subdivisions can also be made under these 3 main types.
Matching face shape to hairstyle mainly considers the hairstyle length. For example, long faces and square faces are generally unsuited to short hair, which visually lengthens the face, so a long hairstyle is chosen instead. The preset mapping relation between face shape and hairstyle length in this embodiment is shown in the following table, but is not limited thereto (a classification sketch follows Table 1):
Serial number | Face shape | Matched hairstyle length
1 | Melon seed face | Ultra-A flush short hair; refreshing corn perm; medium-length straight hair
2 | Pear-shaped face | First-love short hair; high-grade S curl; small cherry-blossom curl
3 | Square face | Korean-style short hair; first-love cut; mid-length clavicle hair
4 | Baby face | Vintage wool curl; French large C curl; French large curl
5 | Long face | Playful bob; vintage wool curl; pear-blossom perm
6 | Standard face | Elegant lob; versatile texture perm
7 | Mixed face | Xin Zhilei cut; classic soft curl; elegant lazy curl

TABLE 1
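To make the mapping of step S303 concrete, here is a minimal sketch that classifies a face shape from the two ratios mentioned above and looks up a few rows of Table 1. The numeric thresholds are assumptions, only three face shapes are handled, and the style names follow the translated table entries.

```python
# Illustrative sketch of face-shape classification and Table 1 lookup.
FACE_TO_LENGTHS = {
    "long face":       ["playful bob", "vintage wool curl", "pear-blossom perm"],
    "square face":     ["Korean-style short hair", "first-love cut",
                        "mid-length clavicle hair"],
    "melon seed face": ["ultra-A flush short hair", "refreshing corn perm",
                        "medium-length straight hair"],
}

def classify_face_shape(length_width_ratio: float, arch_jaw_ratio: float) -> str:
    """Crude classification from face length/width and zygomatic-arch/jaw ratios."""
    if length_width_ratio > 1.5:    # noticeably longer than wide
        return "long face"
    if arch_jaw_ratio < 1.05:       # jaw almost as wide as the zygomatic arch
        return "square face"
    return "melon seed face"

def match_hairstyle_lengths(face_shape: str) -> list[str]:
    return FACE_TO_LENGTHS.get(face_shape, [])
```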
In step S304, the facial contour features of the target user 3D model are determined, and the hairstyle pattern is determined according to a preset mapping relation between facial contour features and hairstyle patterns.
This embodiment may detect set regions of the user's face, determine the facial contour features with reference to the detected contour key point information, and set facial contour feature tags for reference.
For example, the determined facial contour features may include temple depression, zygomatic arch protrusion, mandibular angle width and the like. Fig. 4 illustrates how temple depression is determined: a number of mark points are placed on the face and together enclose the facial contour line; if the contour line is concave at the temple, the temple is depressed (a detection sketch follows Table 2).
The zygomatic bones and zygomatic arches lie at the outer middle of the face; they are important supports and bony landmarks of the face and protect the cranium and facial muscles. The zygomatic bone is a single diamond-shaped bone below and outside the orbit, forming the bony prominence of the cheek. The zygomatic arch, formed by the temporal process of the zygomatic bone and the zygomatic process of the temporal bone, lies on both sides of the craniofacial bones and bows outward. Generally, overly high cheekbones and a prominent zygomatic arch detract from facial attractiveness.
A wide mandibular angle is generally characterized by bony hypertrophy at the corner of the mandible or by an overdeveloped masseter muscle protruding in the cheek area, which partially affects facial appearance.
The main purpose of the facial contour features is to indicate whether hairstyle modification is required. For zygomatic arch protrusion, the matched hairstyle pattern mainly relies on covering hair or a fluffy look: for example, a bangs pattern combined with a perm, with a matching curl size, can address the protruding zygomatic arch. For temple depression, the matched pattern mainly relies on covering, for example a suitable bangs style; a wide mandibular angle can likewise be softened with a suitable bangs style.
The bangs styles provided can be further subdivided into ordinary bangs, slanted bangs, air bangs, 3:7-parted bangs, figure-eight bangs and the like, giving the user more detailed choices and improving the user experience. For example, air bangs can effectively address an overly high hairline and depressed temples; 3:7-parted bangs have a naturally inward-curving arc that flatters the face shape, in particular masking depressed temples and a protruding zygomatic arch, while wispy hair ends add youthful liveliness; and the curled design of figure-eight bangs can effectively soften the facial contour, highlight delicate facial features, and mask a narrow forehead, depressed temples and a protruding zygomatic arch.
The preset mapping relation between facial contour features and hairstyle patterns in this embodiment is shown in the following table, but is not limited thereto:
Serial number | Facial contour feature | Hairstyle pattern
1 | Zygomatic arch protrusion | Bangs style, combined with a perm
2 | Temple depression | Bangs style
3 | Mandibular angle width | Bangs style

TABLE 2
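Tying this back to Fig. 4 and the contour mark points described above, the sketch below tests whether the contour bends inward at the temple using a 2D cross product; the point-ordering convention and sign test are assumptions, not the patent's algorithm.

```python
import numpy as np

def temple_is_depressed(contour: np.ndarray, temple_idx: int) -> bool:
    """Return True if the facial contour is concave at the temple mark point.

    `contour` is an (N, 2) array of mark points ordered down one side of the
    face; `temple_idx` indexes the temple point. Sketch only: the ordering
    and the sign convention below are assumptions.
    """
    v1 = contour[temple_idx] - contour[temple_idx - 1]      # incoming segment
    v2 = contour[temple_idx + 1] - contour[temple_idx]      # outgoing segment
    cross = v1[0] * v2[1] - v1[1] * v2[0]                   # 2D cross product
    return cross < 0   # inward bend for this ordering/side convention
```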
Taking short hair as an example, a short hairstyle can be subdivided into different style types by combining hairstyle length with hairstyle pattern. For example, a delicate short wave is a style the melon seed face can try: short hair that exposes the ears highlights the facial features, the overall feel is fresh and clean, fine bangs keep the look from being monotonous, and a fluffy crown adds airiness. As another example, an airy short cut softens the angles of the face, reading light, simple and refined; its 3:7-parted bangs curve naturally inward, flattering the face shape and in particular masking depressed temples and a protruding zygomatic arch. As yet another example, medium-length straight hair keeps a natural straight feel; the hair ends can be trimmed into layers, which pairs with a melon seed face for a better overall look, and medium-length bangs that fall naturally along the face can effectively mask depressed temples and a protruding zygomatic arch.
In step S305, the facial appearance features of the target user 3D model are determined, and the hairstyle pattern is determined according to a preset mapping relation between facial appearance features and hairstyle patterns.
The facial appearance feature may be an eyebrow feature, a nose feature, a mouth feature or the like. Nose types generally include narrow nose, medium nose, broad nose and the like; mouth types generally include standard lips, thin lips, thick lips and the like.
This embodiment takes the eyebrow feature as the facial appearance feature, without being limited thereto, and matches the corresponding hairstyle pattern according to the eyebrow feature. The eyebrow feature can be determined by detecting key feature points of the user's eyebrow region, and may comprise at least one or a combination of eyebrow shape, eyebrow color and eyebrow depth.
Eyebrow shapes can generally be classified into 3 types, flat eyebrows, raised eyebrows and standard eyebrows, but are not limited thereto. Standard eyebrows suit a relatively wide range of faces: the brow peak generally sits at two thirds of the whole eyebrow, the brow head lies on the vertical line through the inner canthus, and the underside of the eyebrow typically arcs, lighter at both ends and deeper in the middle. Raised eyebrows are characterized by a cleanly lifted brow tail and a relatively heavy color: the brow peak generally sits at one half of the eyebrow, about 3 mm above the brow ridge, and the brow tail lies on the diagonal from the nose wing through the outer canthus. Flat eyebrows look relatively uniform, flat and approachable: the brow peak sits at one half of the eyebrow, the brow head lies on the vertical through the bridge of the nose, and the brow tail lies on a 45-degree line from the nose wing to the outer canthus, curving slightly.
The main function of the eyebrow shape is likewise to indicate whether hairstyle modification is needed (a classification sketch follows Table 3).
The preset mapping relation between eyebrow features and hairstyle patterns in this embodiment is shown in the following table, but is not limited thereto:
Serial number | Eyebrow shape feature | Hairstyle pattern
1 | Flat eyebrow | Bangs style
2 | Raised eyebrow | Bangs style, combined with a perm
3 | Standard eyebrow | Bangs style

TABLE 3
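As an illustration of how the eyebrow shape of Table 3 might be derived from eyebrow-region key points, the sketch below compares three landmark heights; image y grows downward, and the tolerance values are assumptions for demonstration.

```python
def classify_eyebrow(head_y: float, peak_y: float, tail_y: float,
                     tol: float = 1.5) -> str:
    """Rough flat / raised / standard split from brow head, peak and tail
    heights in image coordinates (y grows downward); illustrative only."""
    lift = head_y - peak_y     # how far the peak rises above the brow head
    drop = tail_y - peak_y     # how far the tail falls back below the peak
    if lift < tol and abs(tail_y - head_y) < tol:
        return "flat eyebrow"        # uniform, nearly level brow
    if lift > 2 * tol and drop > 2 * tol:
        return "raised eyebrow"      # peak clearly lifted, tail dropping
    return "standard eyebrow"
```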
In step S306, the head shape of the target user 3D model is determined, and the hairstyle pattern is matched with reference to the head shape.
The head features may also include a head shape, which can be referred to at the same time when matching the corresponding hairstyle according to the head features. Head shapes may generally be classified as round, oval, square and so on. For example, a square head shape is generally unsuited to short hair. Different hairstyle patterns can be selected for different head shapes, such as a perm pattern or a bangs pattern, where the bangs may be ordinary bangs, slanted bangs, air bangs and the like.
In step S307, the neck features and/or shoulder features of the target user 3D model are determined, and the hairstyle matching is performed in combination with the neck features and/or shoulder features.
Besides the head features, this embodiment may also acquire the neck features and/or shoulder features of the target user 3D model and refer to them when matching the corresponding hairstyle. Neck features may generally include a long neck, a short neck and the like; shoulder features may generally include wide shoulders, narrow shoulders and the like. For example, for a long neck, the contour at the nape of the hairstyle may suitably be an arc or a layered reverse-slope shape, as the situation requires; wide shoulders suit long hair, since hair covering the shoulders makes them appear smaller.
Thus, besides the head features, this embodiment also considers the shoulder and neck features, reflecting the user's figure more comprehensively; combined with these more comprehensive figure features, the hairstyle recommendation can be more accurate.
It should be noted that steps S304, S305, S306 and S307 have no necessary order relationship among them.
In step S308, the freckle features and/or acne mark features of the target user 3D model are determined, and the hair color of the hairstyle is determined according to a preset mapping relation between freckle features, acne mark features and hair color.
Freckles are yellowish-brown, punctate pigmentation spots on the facial skin, typically round, oval or irregular. Acne marks are the marks usually left on the face after acne heals; they may be deep or shallow and come in several types, generally including dark acne marks, red acne marks, pitted acne marks and the like.
This embodiment can determine whether the face contains freckles or acne marks based on detection of the user's face region. The freckle feature information may include at least one of lightness information, color information, shape information, size information, density information and the like (a matching sketch follows Table 4).
The preset mapping relation between freckle features, acne mark features and hair color in this embodiment is shown in the following table, but is not limited thereto:
Serial number | Freckle and acne mark features | Hair color
1 | Freckles | Chocolate; caramel chocolate
2 | Acne marks | Dark brown; cold brown

TABLE 4
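The sketch below shows one way steps S308 and S309 could combine: candidate hair colors derived from the pupil color (Table 5, reproduced only as an image in the original) are intersected with the Table 4 candidates. The combination rule and the fallback are illustrative assumptions.

```python
# Rows taken from Table 4; the combination logic is an illustrative assumption.
BLEMISH_TO_COLORS = {
    "freckles":   {"chocolate", "caramel chocolate"},
    "acne marks": {"dark brown", "cold brown"},
}

def match_hair_colors(pupil_candidates: set[str], blemishes: list[str]) -> set[str]:
    """Intersect pupil-color candidates with blemish-based candidates,
    falling back to the pupil candidates if the intersection is empty."""
    result = set(pupil_candidates)
    for blemish in blemishes:
        allowed = BLEMISH_TO_COLORS.get(blemish)
        if allowed:
            result &= allowed
    return result or set(pupil_candidates)
```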
It should be noted that there is no necessary order relationship among steps S304, S305, S306, S307 and S308.
In step S309, a pupil color of the 3D model of the target user is determined, and a hair color of the hairstyle is determined according to a preset mapping relationship between the pupil color and the hair color.
In this embodiment, the pupil color of the user can be determined by obtaining the key feature points, color values and brightness values of the eye region of the target user 3D model and performing color analysis according to a set algorithm, and different hair colors can be recommended for different pupil colors.
The key feature points may include, for example, the white of the eye, the eyeball and the iris. The iris adjusts the size of the pupil and contains pigment: the most pigment generally gives black eyes, a larger amount gives brown eyes, and the least gives blue eyes. Images of these feature points are acquired and expressed in pixels, and colors can be described by computing the average RGB (Red, Green, Blue) value of the pixels. Specifically, the color of each region of the eye can be calculated from the RGB averages of its pixels, and the pupil color finally determined from the calculated region colors.
The hair color of the hairstyle is determined mainly by matching the hair color to the pupil color; the matching may further combine the freckle features and/or acne mark features.
The set algorithm may be, for example, a color feature extraction algorithm, but is not limited thereto; it may again be, for example, a color histogram method or a wavelet-based block image color feature extraction method. Taking the color histogram method as an example: the color histogram is a color feature widely adopted in many image retrieval systems. It describes the proportions of different colors in the whole image without regard to each color's spatial position, i.e., it cannot describe the objects in the image. To compute a color histogram, the color space is divided into several small color intervals, each becoming a bin of the histogram; this process is called color quantization. The histogram is then obtained by counting the number of pixels whose color falls within each interval.
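A minimal sketch of the color quantization and counting just described, assuming an 8-bit RGB image; the bin count of 4 per channel is an arbitrary illustrative choice.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins_per_channel: int = 4) -> np.ndarray:
    """Quantize 8-bit RGB into bins_per_channel**3 intervals and return the
    fraction of pixels falling in each interval (a normalized histogram)."""
    q = (image.astype(np.uint32) * bins_per_channel) // 256        # per-channel bin
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()
```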
This embodiment can identify and analyze the pupil color from the color information near the eye key feature point left_eye_center.
The pupil color in this embodiment may include multiple colors: black, brown, amber, grey, blue, green, silver, hazel, heterochromia, central heterochromia and the like.
The hair colors in this embodiment can include a variety of shades, such as black tea, dark brown, lightest linen, top-grade imperial-concubine linen, matte linen grey, chocolate, warm caramel, honey brown, tea brown, cold brown, warm brown, champagne gold, milk grey, haze blue and the like. Among these, black tea is black with a hint of tea tones; caramel is the combined effect of brown with other tones; and honey brown reads as a predominantly brown-sugar shade with a touch of dark brown in sunlight.
Generally, a warm pupil color is matched with a warm hair color, and a cool pupil color with a correspondingly cool hair color.
The preset mapping relation between pupil color and hair color in this embodiment is given in Table 5, which is reproduced only as an image in the original publication and is therefore not transcribed here.
TABLE 5
The hair color matching of step S309 has no necessary order relative to the preceding hairstyle length and hairstyle pattern matching steps, nor relative to step S308, and may be performed separately and independently.
In step S310, the matched hairstyle and hair color are displayed on the target user 3D model.
This embodiment can remove part of the original hair of the target user 3D model and display the matched hairstyle and hair color for the user's reference. It should be noted that if two or more hairstyles and hair colors are matched, this embodiment can switch the hairstyle and hair color of the target user 3D model, either automatically or manually according to the user's selection. This switching lets the target user screen and compare multiple hairstyles and hair colors, offers more suggestions to choose from, and lets the user settle on a final result according to personal preference, improving both the experience and the functionality.
This embodiment can provide many popular and classic hairstyles for the user to switch among at will. Furthermore, corresponding hair-care methods and haircut key points can be matched to the different hairstyles. Functions for saving video, forwarding, and saving pictures are also provided.
It should be noted that this embodiment may also support operations such as enlarging, reducing and rotating the target user 3D model with the matched hairstyle and hair color, so that the result can be viewed more intuitively from different angles. It should also be noted that the eyebrow feature need not be considered in this embodiment.
The scheme of this embodiment can integrate multiple characteristics of the user: the hairstyle and hair color can be recommended according to the pupil color and the head features, where the head features may include the head shape, the face shape, facial contour features (such as temple depression, zygomatic arch protrusion and mandibular angle width) and facial appearance features (such as eyebrow, nose and mouth features). A hairstyle and hair color best suited to each individual user can thus be provided, giving the user more accurate recommendation references and improving the user experience.
Corresponding to the above embodiments of the method, the present application also provides a 3D model generation apparatus and device, with corresponding embodiments.
Fig. 5 is a schematic structural diagram of a 3D model generation apparatus according to an embodiment of the present application.
Referring to fig. 5, the 3D model generation apparatus 50 includes: an image acquisition module 51, a model generation module 52, a hair color matching module 53, a hair style matching module 54 and a hair style display module 55.
The image acquisition module 51 is used to acquire a face image of the target user, which may be pictures of the user's front face and/or side faces.
The model generation module 52 is used to generate the target user 3D model from the face image acquired by the image acquisition module 51, for example from the acquired pictures of the user's front face and/or side faces.
The hair color matching module 53 is used to acquire the pupil color of the target user 3D model generated by the model generation module 52 and match a corresponding hair color according to the pupil color. The module may determine the user's pupil color by obtaining key feature points, color values and brightness values of the eye region of the model and performing color analysis according to a set algorithm; once the pupil color is determined, the corresponding hair color is matched according to the mapping relation between pupil color and hair color. The set algorithm may be, for example, a color feature extraction algorithm such as a color histogram method or a wavelet-based block image color feature extraction method; the specific operations of color feature extraction can be implemented by various means and are not detailed here.
The hairstyle matching module 54 is used to acquire the head features of the target user 3D model generated by the model generation module 52 and match a corresponding hairstyle according to the head features. The head features include a face shape and facial features; the module may match a corresponding hairstyle length according to the face shape and a corresponding hairstyle pattern according to the facial features.
The hairstyle display module 55 is used to display the matched hairstyle and hair color on the target user 3D model generated by the model generation module 52. When two or more hairstyles and hair colors are matched, the module can switch the model's hairstyle and hair color, either automatically or manually according to the user's selection.
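Structurally, the five modules of Fig. 5 form a straight pipeline. The sketch below only wires them together; the interface names are assumptions and the method bodies are placeholders, not the patent's implementation.

```python
class ModelGenerationApparatus:
    """Skeleton of apparatus 50; each method stands in for one module."""

    def acquire_image(self, user):             # image acquisition module 51
        raise NotImplementedError

    def generate_model(self, face_image):      # model generation module 52
        raise NotImplementedError

    def match_hair_color(self, model):         # hair color matching module 53
        raise NotImplementedError

    def match_hairstyle(self, model):          # hairstyle matching module 54
        raise NotImplementedError

    def display(self, model, styles, colors):  # hairstyle display module 55
        raise NotImplementedError

def run_pipeline(apparatus: ModelGenerationApparatus, user) -> None:
    """Wire the five modules end to end, mirroring steps S101 to S105."""
    image = apparatus.acquire_image(user)
    model = apparatus.generate_model(image)
    colors = apparatus.match_hair_color(model)
    styles = apparatus.match_hairstyle(model)
    apparatus.display(model, styles, colors)
```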
Fig. 6 is another schematic structural diagram of a 3D model generation apparatus according to an embodiment of the present application.
Referring to fig. 6, the 3D model generation apparatus 50 includes: an image acquisition module 51, a model generation module 52, a hair color matching module 53, a hair style matching module 54 and a hair style display module 55.
The functions of the image acquisition module 51, the model generation module 52, the hair color matching module 53, the hairstyle matching module 54 and the hairstyle display module 55 are as described with reference to fig. 5 and are not repeated here.
The hair style matching module 54 may include: a face processing sub-module 541 and a facial feature processing sub-module 542.
A face processing submodule 541, configured to match a corresponding hair style length according to a face in the head feature. In the present embodiment, the face shape may be set to several types in advance, for example, the face shape may be divided into 7 types: melon seed face, pear-shaped face, square face, baby face, long face, standard face, mixed face. In the present embodiment, the length of the hairstyle can be divided into several types, for example, the hairstyle can be divided into short hair, medium hair and long hair. It should be noted that it is also possible to subdivide the hair into 3 main types of short, medium and long hair.
And the facial feature processing sub-module 542 is used for matching the corresponding hair style pattern according to the facial features in the head features. The facial features may include facial contour features and facial appearance features; the facial contour features may include at least one of a temple depression, a zygomatic arch protrusion and a mandibular angle width, and the facial appearance features may include eyebrow features, nose features, mouth features and the like.
The hair style matching module 54 may further include: a head shape processing sub-module 543.
And the head type processing sub-module 543 is used for matching a corresponding hair style pattern according to the head type in the head features. The classification of head types may generally include round, oval, square and the like. When matching the corresponding hair style pattern according to the facial features, the head type can be referred to at the same time. For example, a square head shape is generally not suited to short hair and can instead be matched with long hair. Different hair style patterns can be selected according to different head types, such as a perm pattern or a bangs pattern, and the bangs can be, for example, straight bangs, side-swept bangs, air bangs and the like.
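Taken together, the face-shape and head-shape rules above amount to table lookups. The sketch below encodes them as plain dictionaries; the specific pairings are illustrative assumptions, and only the lookup structure follows the description.

```python
# Illustrative tables; the pairings are assumptions, not taken from the patent.
FACE_TO_LENGTHS = {
    "melon-seed": ["short", "medium", "long"],
    "pear-shaped": ["medium", "long"],
    "square": ["medium", "long"],
    "baby-face": ["short", "medium"],
    "long": ["medium"],
    "standard": ["short", "medium", "long"],
    "mixed": ["medium"],
}

HEAD_TO_PATTERNS = {
    "round": ["air bangs", "perm"],
    "oval": ["straight bangs", "side-swept bangs", "perm"],
    "square": ["side-swept bangs", "perm"],  # square head steered to longer styles
}

def match_hairstyles(face_shape: str, head_shape: str):
    """Cross the face-shape length candidates with the head-shape patterns."""
    lengths = FACE_TO_LENGTHS.get(face_shape, ["medium"])
    patterns = HEAD_TO_PATTERNS.get(head_shape, ["straight bangs"])
    return [(length, pattern) for length in lengths for pattern in patterns]
```

Facial contour, neck, and shoulder features would then act as further filters over the same candidate list.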
It should be noted that the hair color matching module 53 can also determine the hair color of the hair style according to a preset mapping relation between freckle characteristics or acne mark characteristics and hair colors. Freckles are yellowish-brown, punctate pigmentation spots on the facial skin, typically round, oval or irregular. Acne marks are usually the marks left on the face after acne has cleared, and can be divided into deep marks and shallow marks. Acne marks also come in a plurality of types, generally including dark acne marks, red acne marks, pitted acne scars and the like. In the case of freckles, the hair color can generally be, but is not limited to, chocolate, caramel-chocolate and the like; in the case of acne marks, the hair color can generally be, but is not limited to, dark brown, cool brown and the like. In addition, when the hair style matching module 54 matches the corresponding hair style according to the head characteristics, the matching can also be performed in combination with the neck characteristics and/or shoulder characteristics.
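As with the hairstyle tables, the hair-color rule described above can be sketched as a lookup that intersects pupil-based candidates with blemish-based preferences; the concrete color lists are assumptions echoing the examples in the text, not values fixed by the patent.

```python
# Illustrative color tables; the concrete entries are assumptions that
# echo the freckle/acne-mark examples in the text above.
PUPIL_TO_COLORS = {
    "dark-brown": ["natural black", "dark brown"],
    "brown": ["chestnut", "caramel-chocolate"],
    "green": ["cool brown"],
    "blue": ["ash brown"],
}

BLEMISH_TO_COLORS = {
    "freckles": ["chocolate", "caramel-chocolate"],
    "acne-marks": ["dark brown", "cool brown"],
}

def match_hair_colors(pupil_color: str, blemishes=()):
    """Intersect pupil-based candidates with blemish-based preferences,
    falling back to the broader list whenever the intersection is empty."""
    candidates = PUPIL_TO_COLORS.get(pupil_color, ["dark brown"])
    for blemish in blemishes:
        preferred = BLEMISH_TO_COLORS.get(blemish, [])
        overlap = [c for c in candidates if c in preferred]
        candidates = overlap or preferred or candidates
    return candidates

# e.g. match_hair_colors("brown", blemishes=("freckles",))
# -> ["caramel-chocolate"]
```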
In this embodiment, the hair color corresponding to the pupil color can be matched, the hairstyle length corresponding to the face shape of the user can be matched, and the corresponding hairstyle pattern can be matched according to the facial features. Furthermore, the hairstyle pattern can be matched with additional reference to the head shape, facial contour features, facial appearance features, neck features, shoulder features and the like. In this way, more characteristics of the user are comprehensively considered, and the recommended hairstyle and hair color are more accurate.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may be a mobile terminal device but is not limited thereto.
Referring to fig. 7, the electronic device 1000 includes a memory 1010 and a processor 1020.
The processor 1020 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 1010 may include various types of storage units, such as system memory, read-only memory (ROM), and persistent storage. The ROM may store static data or instructions needed by the processor 1020 or other modules of the computer. The persistent storage may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the persistent storage. In other embodiments, the persistent storage may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory. The system memory may store instructions and data that some or all of the processors require at runtime. Furthermore, the memory 1010 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), and magnetic and/or optical disks. In some embodiments, the memory 1010 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, a Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 1010 has stored thereon executable code that, when processed by the processor 1020, may cause the processor 1020 to perform some or all of the methods described above.
The aspects of the present application have been described in detail hereinabove with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required in the present application. In addition, it can be understood that the steps in the method of the embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform some or all of the various steps of the above-described methods in accordance with the present application.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the applications disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A method of generating a 3D model, comprising:
acquiring a face image of a target user;
generating a target user 3D model according to the face image;
acquiring the pupil color of the 3D model of the target user, and matching a corresponding hair color according to the pupil color;
acquiring the head characteristics of the 3D model of the target user, and matching a corresponding hair style according to the head characteristics;
displaying the matched hairstyle and hair color on the target user 3D model.
2. The method of claim 1, wherein the head features include a face shape and facial features;
the step of matching a corresponding hairstyle according to the head features comprises:
and matching the corresponding hairstyle length according to the face shape, and matching the corresponding hairstyle pattern according to the facial features.
3. The method of claim 2, wherein:
the facial features include: facial contour features and facial appearance features;
the facial contour features comprise at least one of a temple depression, a zygomatic arch protrusion and a mandibular angle width;
the facial appearance feature includes at least one of an eyebrow feature, a nose feature, and a mouth feature.
4. The method of claim 2, wherein:
the head feature further comprises a head shape;
the step of matching a corresponding hairstyle according to the head features comprises:
and matching the corresponding hairstyle length according to the face shape, and matching the corresponding hairstyle style according to the facial features and the head shape.
5. The method of claim 1, wherein:
the method further comprises the following steps: acquiring neck features and/or shoulder features of the target user 3D model;
when the corresponding hairstyle is matched according to the head characteristics, the matching of the hairstyle is simultaneously carried out by combining the neck characteristics and/or the shoulder characteristics.
6. The method of claim 1, wherein:
the method further comprises the following steps: acquiring freckle characteristics and/or acne mark characteristics of the 3D model face of the target user;
and when the corresponding hair color is matched according to the pupil color, simultaneously combining the freckle characteristic and/or the acne mark characteristic to match the hair color.
7. The method of claim 1, wherein the obtaining the pupil color of the 3D model of the target user comprises:
the pupil color of the target user is determined by obtaining key feature points, color values and brightness values of the eye area of the 3D model of the target user and performing color analysis according to a set algorithm.
8. The method according to any one of claims 1 to 7, wherein the displaying the matched hairstyle and hair color on the target user 3D model comprises:
and when two or more hairstyles and hair colors are matched, switching the hairstyle and hair color of the 3D model of the target user, wherein the switching includes automatic switching or manual switching according to the selection of the user.
9. A 3D model generation apparatus, comprising:
the image acquisition module is used for acquiring a face image of a target user;
the model generation module is used for generating a 3D model of the target user according to the face image acquired by the image acquisition module;
the hair color matching module is used for acquiring the pupil color of the target user 3D model generated by the model generation module and matching a corresponding hair color according to the pupil color;
the hairstyle matching module is used for acquiring the head characteristics of the target user 3D model generated by the model generation module and matching a corresponding hairstyle according to the head characteristics;
and the hair style display module is used for displaying the matched hair style and hair color on the target user 3D model generated by the model generation module.
10. The apparatus according to claim 9, wherein the hair style matching module comprises:
the face shape processing submodule is used for matching the corresponding hairstyle length according to the face shape in the head features;
and the facial feature processing sub-module is used for matching the corresponding hair style pattern according to the facial features in the head features.
11. The apparatus according to claim 10, wherein the hair style matching module further comprises:
and the head type processing submodule is used for matching a corresponding hair style pattern according to the head type in the head characteristics.
12. An electronic device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-8.
13. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-8.
CN202010449523.XA 2020-05-25 2020-05-25 3D model generation method, device and equipment Active CN113724366B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010449523.XA CN113724366B (en) 2020-05-25 2020-05-25 3D model generation method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010449523.XA CN113724366B (en) 2020-05-25 2020-05-25 3D model generation method, device and equipment

Publications (2)

Publication Number Publication Date
CN113724366A (en) 2021-11-30
CN113724366B (en) 2024-02-27

Family

ID=78671601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010449523.XA Active CN113724366B (en) 2020-05-25 2020-05-25 3D model generation method, device and equipment

Country Status (1)

Country Link
CN (1) CN113724366B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020014844A (en) * 2000-07-18 2002-02-27 최창석 Three dimensional face modeling method
US20070058858A1 (en) * 2005-09-09 2007-03-15 Michael Harville Method and system for recommending a product based upon skin color estimated from an image
US20070292020A1 (en) * 2006-06-01 2007-12-20 Carlo Guidi Method of selecting the harmonious hair coloration suited to a person
JP2011101823A (en) * 2011-02-14 2011-05-26 Kao Corp Hairstyle advice method
TW201301147A (en) * 2011-06-16 2013-01-01 Jian-Wen Peng An intelligent system and algorithms of facial analysis and hairstyle fitting
CN105069180A (en) * 2015-06-19 2015-11-18 上海卓易科技股份有限公司 Hair style design method and system
CN107545051A (en) * 2017-08-23 2018-01-05 武汉理工大学 Hair style design system and method based on image procossing
CN107808136A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, readable storage medium storing program for executing and computer equipment
CN108305146A (en) * 2018-01-30 2018-07-20 杨太立 A kind of hair style recommendation method and system based on image recognition
KR101959871B1 (en) * 2017-12-20 2019-03-19 이해분 Apparatus and method for providing hair style recommendation information
KR20190116052A (en) * 2018-04-03 2019-10-14 고려대학교 산학협력단 Deep learning based personal color diagnosis and virtual make-up method and apparatus
CN110580484A (en) * 2018-06-07 2019-12-17 郭小平 Intelligent hairstyle matching system
CN111161035A (en) * 2019-12-31 2020-05-15 北京三快在线科技有限公司 Dish recommendation method and device, server, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113724366B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
KR102241153B1 (en) Method, apparatus, and system generating 3d avartar from 2d image
KR102045695B1 (en) Facial image processing method and apparatus, and storage medium
CN112784773B (en) Image processing method and device, storage medium and terminal
CN101055647B (en) Method and device for processing image
CN109952594B (en) Image processing method, device, terminal and storage medium
US20190130652A1 (en) Control method, controller, smart mirror, and computer readable storage medium
CN108171789B (en) Virtual image generation method and system
CN108447017A (en) Face virtual face-lifting method and device
CN109690617A (en) System and method for digital vanity mirror
JP5949331B2 (en) Image generating apparatus, image generating method, and program
CN109671016B (en) face model generation method and device, storage medium and terminal
CN109801380A (en) A kind of method, apparatus of virtual fitting, storage medium and computer equipment
CN108510500B (en) Method and system for processing hair image layer of virtual character image based on human face skin color detection
WO2020177434A1 (en) Image processing method and apparatus, image device, and storage medium
CA2667526A1 (en) Method and device for the virtual simulation of a sequence of video images
CN105405157A (en) Portrait Generating Device And Portrait Generating Method
US10512321B2 (en) Methods, systems and instruments for creating partial model of a head for use in hair transplantation
KR101987189B1 (en) Makeup recommendation kiosk
WO2023273247A1 (en) Face image processing method and device, computer readable storage medium, terminal
CN114187166A (en) Image processing method, intelligent terminal and storage medium
CN108833772A (en) Taking pictures based on depth camera guides system and method
CN113724366B (en) 3D model generation method, device and equipment
CN114972014A (en) Image processing method and device and electronic equipment
CN114972601A (en) Model generation method, face rendering device and electronic equipment
FR2920938A1 (en) Image simulating method for beauty industry, involves deforming parametric models to adapt to contours of features on face, and detecting and analyzing cutaneous structure of areas of face by maximizing gradient flow of brightness

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant