CN110910487B - Construction method, construction device, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN110910487B
CN110910487B (granted publication of application CN201811088482.5A)
Authority
CN
China
Prior art keywords
hairstyle
image
model
color
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811088482.5A
Other languages
Chinese (zh)
Other versions
CN110910487A (en)
Inventor
林俊国
阎法典
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811088482.5A priority Critical patent/CN110910487B/en
Publication of CN110910487A publication Critical patent/CN110910487A/en
Application granted granted Critical
Publication of CN110910487B publication Critical patent/CN110910487B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme involving all processing steps from image acquisition to 3D model generation

Abstract

The invention discloses a construction method, a construction device, an electronic device and a computer-readable storage medium. The construction method comprises the following steps: acquiring multiple frames of depth images and color images of the user at different angles; determining a face depth image and a face color image of the user according to the multiple frames of depth images and color images; constructing a three-dimensional face model of the user according to the face depth image and the face color image; determining a hairstyle outline image and a hairstyle color image of the user according to the multiple frames of depth images and color images; searching a hairstyle database, according to the hairstyle outline image and the hairstyle color image, for a standard hairstyle model matching the hairstyle of the user to serve as the three-dimensional hairstyle model of the user; and fusing the three-dimensional face model and the three-dimensional hairstyle model to construct a three-dimensional head model. The construction method of the embodiments of the invention can construct a more complete three-dimensional head model of the user from less hairstyle information, improving the appearance of the constructed three-dimensional head model and the user's experience.

Description

Construction method, construction device, electronic device, and computer-readable storage medium
Technical Field
The present invention relates to the field of three-dimensional modeling technology, and in particular, to a construction method, a construction apparatus, an electronic apparatus, and a computer readable storage medium.
Background
Existing methods for constructing a three-dimensional model of a user's head have difficulty accurately acquiring the depth information of the hair, because human hair contains too much fine detail. Even when the hair depth information is acquired, the three-dimensional model of the user's head cannot be displayed completely, so the resulting model neither represents the head well nor looks attractive.
Disclosure of Invention
Embodiments of the present invention provide a construction method, a construction apparatus, an electronic apparatus, and a computer-readable storage medium.
The method for constructing the three-dimensional head model of the user comprises the following steps: acquiring a plurality of frames of depth images and color images of different angles of the user; determining a face depth image and a face color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images; constructing a three-dimensional face model of the user according to the face depth image and the face color image; determining a hairstyle outline image and a hairstyle color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images; searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user; fusing the three-dimensional facial model and the three-dimensional hairstyle model to construct the three-dimensional head model.
The device for constructing the three-dimensional head model of the user comprises an acquisition module, a first determination module, a first construction module, a second determination module, a second construction module and a fusion module. The acquisition module is used for acquiring a plurality of frames of depth images and color images of different angles of the user. The first determining module is used for determining a face depth image and a face color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images. The first construction module is used for constructing a three-dimensional face model of the user according to the face depth image and the face color image. The second determining module is used for determining a hairstyle outline image and a hairstyle color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images. The second construction module is used for searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user. The fusion module is used for fusing the three-dimensional face model and the three-dimensional hairstyle model to construct the three-dimensional head model.
The electronic device of the embodiment of the invention comprises a processor. The processor is configured to: acquiring a plurality of frames of depth images and color images of different angles of the user; determining a face depth image and a face color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images; constructing a three-dimensional face model of the user according to the face depth image and the face color image; determining a hairstyle outline image and a hairstyle color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images; searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user; fusing the three-dimensional facial model and the three-dimensional hairstyle model to construct the three-dimensional head model.
An electronic device of an embodiment of the present invention includes one or more processors, memory, and one or more programs. The one or more programs are stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the construction method described above.
The computer-readable storage medium of the embodiments of the present invention includes a computer program for use in conjunction with an electronic device, the computer program being executable by a processor to perform the construction method described above.
According to the construction method, the construction device, the electronic device and the computer-readable storage medium, a matched standard hairstyle model can be found directly in the hairstyle database according to the hairstyle outline and the hairstyle color of the user, and fused with the three-dimensional face model of the user to obtain the three-dimensional head model. Therefore, on the one hand, the detailed information of the user's hairstyle does not need to be collected, so acquiring the three-dimensional data of the user's head is simple and convenient; on the other hand, a more complete three-dimensional head model of the user can be constructed, which improves the appearance of the constructed model and the user's experience.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 2 is a block schematic diagram of a three-dimensional head model building apparatus according to some embodiments of the invention.
Fig. 3 is a schematic structural diagram of an electronic device according to some embodiments of the present invention.
FIG. 4 is a schematic view of a scenario of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 5 is a flow chart of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 6 is a flow chart of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 7 is a block diagram of a second determination module of a three-dimensional head model building apparatus according to some embodiments of the present invention.
FIG. 8 is a block schematic diagram of a fusion unit of a three-dimensional head model building apparatus according to some embodiments of the present invention.
FIG. 9 is a flow chart of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 10 is a block diagram of a second build module of the build apparatus for three-dimensional head model of certain embodiments of the invention.
FIG. 11 is a flow chart of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 12 is a block diagram of a second build module of the build apparatus for three-dimensional head model of certain embodiments of the invention.
FIG. 13 is a flow chart of a method of constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 14 is a block diagram of an apparatus for constructing a three-dimensional head model according to some embodiments of the invention.
FIG. 15 is a schematic diagram of the connection of an electronic device with a computer readable storage medium according to some embodiments of the invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
Referring to fig. 1, the present invention provides a method for constructing a three-dimensional head model of a user. The construction method comprises the following steps:
01: acquiring multiple frames of depth images and color images of the user at different angles;
02: determining a face depth image and a face color image of the user according to the multi-frame depth image and the multi-frame color image;
03: constructing a three-dimensional face model of the user according to the face depth image and the face color image;
04: determining a hairstyle outline image and a hairstyle color image of the user according to the multi-frame depth image and the multi-frame color image;
05: searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user; and
06: and fusing the three-dimensional face model and the three-dimensional hairstyle model to construct a three-dimensional head model.
Referring to fig. 2, the present invention further provides a device 10 for constructing a three-dimensional head model of a user. The building device 10 comprises an acquisition module 11, a first determination module 12, a first building module 13, a second determination module 14, a second building module 15 and a fusion module 16. Step 01 may be implemented by the acquisition module 11. Step 02 may be implemented by the first determination module 12. Step 03 may be implemented by the first building block 13. Step 04 may be implemented by the second determination module 14. Step 05 may be implemented by the second building block 15. Step 06 may be implemented by fusion module 16.
That is, the acquisition module 11 may be used to acquire multiple frames of depth images and color images of the user at different angles. The first determination module 12 may be configured to determine a facial depth image and a facial color image of the user from the plurality of depth images and the plurality of color images. The first construction module 13 may be used to construct a three-dimensional facial model of the user from the facial depth image and the facial color image. The second determination module 14 may be configured to determine a hairstyle profile image and a hairstyle color image of the user from the multiple frames of depth images and the multiple frames of color images. The second construction module 15 may be used to find a standard hairstyle model matching the user's hairstyle in the hairstyle database from the hairstyle contour image and the hairstyle color image as a three-dimensional hairstyle model for the user. The fusion module 16 may be used to fuse the three-dimensional facial model and the three-dimensional hairstyle model to construct a three-dimensional head model.
Referring to fig. 3, the present invention further provides an electronic device 100. Build apparatus 10 may be applied to electronic apparatus 100. The electronic device 100 includes a processor 20. Step 01, step 02, step 03, step 04, step 05 and step 06 may all be implemented by the processor 20. That is, the processor 20 may be configured to acquire depth images and color images of a plurality of frames of different angles of the user, determine a face depth image and a face color image of the user from the plurality of frames of depth images and the plurality of frames of color images, construct a three-dimensional face model of the user from the face depth image and the face color image, determine a hairstyle contour image and a hairstyle color image of the user from the plurality of frames of depth images and the plurality of frames of color images, find a standard hairstyle model matching a hairstyle of the user from the hairstyle contour image and the hairstyle color image in a hairstyle database as a three-dimensional hairstyle model of the user, and fuse the three-dimensional face model and the three-dimensional hairstyle model to construct a three-dimensional head model.
The electronic device 100 may be a mobile phone, a tablet computer, a smart wearable device (smart watch, smart band, smart glasses, smart helmet), an unmanned aerial vehicle, etc., which is not limited herein.
The depth image indicates depth information of a user and a scene in which the user is located. The depth image may be acquired in any of three ways:
(1) Binocular stereoscopic ranging: the electronic device 100 includes at least two cameras (e.g., an infrared camera and a visible light camera 40, or two visible light cameras 40) separated by a certain baseline distance. The processor 20 controls the two cameras to capture simultaneously to obtain two images, matches pixel points between the two images, and calculates the depth of each pixel according to the matching result, thereby obtaining a depth image;
(2) Structured light ranging: the electronic device 100 includes a structured light projector and an image collector. The structured light projector emits laser patterns, and the image collector collects the laser patterns modulated by the user to obtain laser images. The processor 20 calculates the offset of each pixel based on the laser image and the reference image, and further calculates the depth of each pixel, thereby obtaining a depth image.
(3) Time-of-flight ranging: the electronic device 100 includes a light emitter and a receiver. The light emitter emits laser light and the receiver receives the laser light reflected back by the user. The processor 20 acquires the emission time and the reception time of the laser light to calculate the time of flight of the laser light in space, and further calculates the depth of each pixel, thereby obtaining a depth image.
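All three ranging principles above reduce to a per-pixel depth computation. As an illustration of the time-of-flight case, the sketch below applies the round-trip relation depth = speed of light * flight time / 2; the helper function and the timestamp values are hypothetical, not part of the patent.

```python
# Sketch of time-of-flight depth recovery (hypothetical helper, not from the
# patent): depth = speed of light * round-trip flight time / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_depth(emit_time_s, receive_time_s):
    """Return the depth in metres given laser emission/reception timestamps."""
    flight_time = receive_time_s - emit_time_s
    return C * flight_time / 2.0

# A pulse received 4 ns after emission corresponds to roughly 0.6 m.
d = tof_depth(0.0, 4e-9)
```

In practice the processor would evaluate this relation for every pixel of the sensor to assemble the depth image.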
The color image indicates the user and the color information of the scene in which the user is located. The color image may be captured by the visible light camera 40.
Referring to fig. 3 and 4, the electronic device 100 includes a display 60 and a speaker 70. In capturing the depth image and the color image, the processor 20 controls the depth camera 50 and the visible light camera 40 to be turned on simultaneously to acquire the depth image and the color image, respectively. The display 60 or the speaker 70 may prompt the user to rotate the head so that the depth camera 50 and the visible light camera 40 may sequentially capture depth images and color images of the user's head at different angles. For example, as shown in fig. 4, the display screen 60 displays left and right arrows to prompt the user to turn the head left and right, respectively, and the depth camera 50 and the visible light camera 40 may sequentially capture depth images and color images of the front view angle, the left view angle, and the right view angle of the user. Thus, each angle has a corresponding one-frame depth image and one-frame color image.
After the processor 20 obtains the multi-frame depth image and the multi-frame color image, the face color sub-image in each frame of color image is first identified based on the face recognition algorithm, and then the face color sub-image of each frame is corrected by using the depth information indicated by the depth image corresponding to each frame of color image. And then, determining a face depth sub-image corresponding to the face color sub-image in the depth image by utilizing the corresponding relation between the color image and each pixel of the depth image. And then, fusing a plurality of frames of facial depth sub-images to obtain a facial depth image of the user, and fusing a plurality of frames of facial color sub-images to obtain a facial color image of the user. And finally, constructing a three-dimensional point cloud of the face according to the depth information indicated by the face depth image, and rendering the three-dimensional point cloud according to the color information indicated by the face color image to obtain a complete three-dimensional face model.
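The point-cloud construction step described above can be illustrated with a pinhole camera model: each pixel with a valid depth measurement is back-projected into camera coordinates. The intrinsics (fx, fy, cx, cy) and the dictionary-based depth image below are hypothetical simplifications for illustration only.

```python
# Sketch: back-project a face depth image into a three-dimensional point
# cloud with a pinhole camera model. The intrinsics and the dict-based
# depth image are hypothetical, chosen only for illustration.
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: dict mapping pixel (u, v) -> depth in metres.
    Returns a list of (x, y, z) points in camera coordinates."""
    points = []
    for (u, v), z in depth.items():
        if z <= 0:  # skip pixels with no valid depth measurement
            continue
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return points

# A pixel at the principal point back-projects onto the optical axis.
cloud = depth_to_point_cloud({(320, 240): 0.5, (0, 0): 0.0},
                             fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Rendering would then attach the color of each pixel in the face color image to the corresponding point of this cloud.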
The processor 20 then determines a hairstyle contour image and a hairstyle color image of the user based on the plurality of depth images and the plurality of color images. The memory 30 stores a hairstyle database composed of a plurality of standard hairstyle models of different sexes, different colors, different lengths, and different styles, each of which is a three-dimensional model. The processor 20 finds a standard hairstyle model in the hairstyle database that matches the contour features and color features in the user's hairstyle contour image and hairstyle color image, and takes that standard hairstyle model as the user's three-dimensional hairstyle model.
Finally, the processor 20 fuses the three-dimensional facial model with the three-dimensional hairstyle model to construct a three-dimensional head model of the user and displays it on the display screen 60.
According to the method for constructing the three-dimensional head model, the construction device 10 and the electronic device 100, a matched standard hairstyle model can be found directly in the hairstyle database according to the hairstyle outline and the hairstyle color of the user, and fused with the three-dimensional face model of the user to obtain the three-dimensional head model. Therefore, on the one hand, the detailed information of the user's hairstyle does not need to be collected, so acquiring the three-dimensional data of the user's head is simple and convenient; on the other hand, a more complete three-dimensional head model of the user can be constructed, which improves the appearance of the constructed model and the user's experience.
Referring to fig. 5 and 6 together, in some embodiments, determining the hairstyle contour image and the hairstyle color image of the user according to the multi-frame depth image and the multi-frame color image in step 04 includes:
041: extracting a head depth image and a head color image according to each frame of depth image and the color image of the corresponding depth image;
042: recognizing a human face in each frame of head color image to extract an initial hairstyle color sub-image;
043: correcting the initial hairstyle color sub-image according to the head depth image corresponding to the head color image to obtain a hairstyle color sub-image;
044: determining a hairstyle outline sub-image according to the hairstyle color sub-image and the head depth image;
045: and respectively fusing the multi-frame hairstyle outline sub-image and the multi-frame hairstyle color sub-image to obtain a hairstyle outline image and a hairstyle color image.
Wherein step 045 further comprises:
0451: marking at least one fusion characteristic point in each frame of hairstyle color sub-image;
0452: splicing multi-frame hairstyle color sub-images according to the fusion characteristic points to obtain a hairstyle color image; and
0453: and splicing multiple frames of hairstyle contour sub-images according to the corresponding relation between the hairstyle contour sub-images and the hairstyle color sub-images and the fusion characteristic points to obtain the hairstyle contour images.
Referring to fig. 7 and 8 together, in some embodiments, the second determining module 14 includes an extracting unit 141, an identifying unit 142, a correcting unit 143, a determining unit 144, and a fusing unit 145. The fusion unit 145 includes an annotation subunit 1451, a first splice subunit 1452, and a second splice subunit 1453. Step 041 may be implemented by extraction unit 141. Step 042 may be implemented by the identification unit 142. Step 043 may be implemented by the correction unit 143. Step 044 may be implemented by the determination unit 144. Step 045 may be implemented by the fusion unit 145. Step 0451 may be implemented by labeling subunit 1451. Step 0452 may be implemented by the first stitching subunit 1452. Step 0453 may be implemented by the second stitching subunit 1453.
That is, the extraction unit 141 may be used to extract a head depth image and a head color image from each frame of the depth image and the color image of the corresponding depth image. The recognition unit 142 may be configured to recognize a face in the head color image of each frame to extract an initial hairstyle color sub-image. The correction unit 143 may be configured to correct the initial hairstyle color sub-image according to the head depth image corresponding to the head color image to obtain the hairstyle color sub-image. The determining unit 144 may be used to determine a hairstyle contour sub-image from the hairstyle color sub-image and the head depth image. The fusing unit 145 may be configured to fuse the multiple frames of hairstyle contour sub-images and the multiple frames of hairstyle color sub-images to obtain a hairstyle contour image and a hairstyle color image, respectively. The labeling subunit 1451 may be configured to label at least one fused feature point in each frame of the hairstyle color sub-image. The first stitching subunit 1452 may be configured to stitch the multiple frames of hairstyle color sub-images according to the fused feature points to obtain a hairstyle color image. The second stitching subunit 1453 may be configured to stitch multiple frames of hairstyle contour sub-images according to the corresponding relationship between the hairstyle contour sub-images and the hairstyle color sub-images and the fusion feature points to obtain a hairstyle contour image.
Referring back to FIG. 3, in some embodiments, steps 041, 042, 043, 044, and 045 may all be implemented by processor 20. Step 0451, step 0452, step 0453 may also be implemented by the processor 20. That is, the processor 20 may be configured to extract a head depth image and a head color image from each frame of the depth image and the color image of the corresponding depth image, identify a face in each frame of the head color image to extract an initial hair style color sub-image, correct the initial hair style color sub-image to obtain a hair style color sub-image based on the head depth image corresponding to the head color image, determine a hair style contour sub-image based on the hair style color sub-image and the head depth image, and blend a plurality of frames of hair style contour sub-images and a plurality of frames of hair style color sub-images, respectively, to obtain the hair style contour image and the hair style color image. The processor 20 specifically performs the operations of marking at least one fused feature point in each frame of the hairstyle color sub-image, stitching multiple frames of hairstyle color sub-images according to the fused feature points to obtain a hairstyle color image, and stitching multiple frames of hairstyle contour sub-images according to the corresponding relationship between the hairstyle contour sub-images and the hairstyle color sub-images and the fused feature points to obtain a hairstyle contour image when executing step 045.
Specifically, the processor 20 first divides each frame of depth image into a depth foreground region and a depth background region according to its depth information; for example, the processor 20 sets a depth range, classifies pixels whose depth information falls into the depth range as the depth foreground region, and classifies the remaining pixels as the depth background region. The depth foreground region may be referred to as the head depth image. The depth range may be one or more. When there are multiple depth ranges, the processor 20 first divides the depth image into multiple regions, sets a depth range for each region, and classifies the pixels in each region whose depth information falls within the depth range corresponding to that region as the depth foreground region. Then, a color foreground region corresponding to the depth foreground region is determined in the color image as the head color image according to the correspondence between the depth image and the color image. Subsequently, the processor 20 recognizes the face in the head color image according to the face recognition algorithm and removes the pixels of the face portion to obtain an initial hairstyle color sub-image. Subsequently, the processor 20 corrects the initial hairstyle color sub-image according to the depth information of the head depth image; for example, the processor 20 separates the hairstyle portion from the residual background portion according to the colors in the initial hairstyle color sub-image, and corrects the edge of the hairstyle portion according to the depth information of the head depth image to obtain the final hairstyle color sub-image. The processor 20 then determines, from the head depth image, a hairstyle contour sub-image corresponding to the corrected hairstyle color sub-image based on the correspondence between the head depth image and the head color image.
In this manner, the processor 20 performs the above-described processing on each pair of associated depth images and color images to finally obtain a multi-frame one-to-one correspondence of the hairstyle contour sub-image and the hairstyle color sub-image.
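The depth-range segmentation described above amounts to a threshold test per pixel. A minimal sketch, with a hypothetical dict-based depth image and a single depth range:

```python
# Sketch of the depth-range segmentation described above: pixels whose depth
# falls inside the configured range form the depth foreground (head) region.
# The dict-based depth image and the range values are hypothetical.
def segment_foreground(depth_pixels, near, far):
    """depth_pixels: dict (u, v) -> depth in metres.
    Returns the set of pixels classified as the depth foreground region."""
    return {p for p, d in depth_pixels.items() if near <= d <= far}

pixels = {(0, 0): 0.4, (0, 1): 0.6, (1, 0): 2.5, (1, 1): 0.0}
head = segment_foreground(pixels, near=0.3, far=1.0)  # {(0, 0), (0, 1)}
```

With multiple depth ranges, the same test would simply be applied region by region, each region using its own (near, far) pair.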
Finally, the processor 20 fuses the multiple frames of hairstyle contour sub-images to obtain a complete hairstyle contour image, and fuses the multiple frames of hairstyle color sub-images to obtain a complete hairstyle color image. Suppose, for example, that the electronic device 100 acquires depth images and color images at three angles (left, middle and right); the hairstyle contour sub-image then has three frames, namely a left, a middle and a right hairstyle contour sub-image, and the hairstyle color sub-image likewise has three frames, namely a left, a middle and a right hairstyle color sub-image. The processor 20 marks one or more fusion feature points in each frame of hairstyle color sub-image, for example the left temple and the middle position point of the forehead hairline in the left hairstyle color sub-image, the middle position point of the forehead hairline in the middle hairstyle color sub-image, and the middle position point of the forehead hairline and the right temple in the right hairstyle color sub-image. The processor 20 then fuses the left hairstyle color sub-image with the middle hairstyle color sub-image according to their shared fusion feature point (the middle position point of the forehead hairline), and further fuses the resulting image with the right hairstyle color sub-image to obtain the final hairstyle color image.
Likewise, the left hairstyle contour sub-image and the middle hairstyle contour sub-image can be fused according to the fusion relation of pixels between the left hairstyle color sub-image and the middle hairstyle color sub-image, and the fused image of the left hairstyle contour sub-image and the middle hairstyle contour sub-image is further fused with the right hairstyle contour sub-image to obtain a final hairstyle contour image. Wherein the hairstyle contour image indicates depth information of the hairstyle and the hairstyle color image indicates color information of the hairstyle.
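The feature-point stitching just described can be sketched as aligning two sub-images by the offset between a shared fusion feature point and then merging their pixels. The dict-based images, anchor coordinates and base-takes-priority merge rule below are hypothetical simplifications:

```python
# Sketch: align two sub-images by the offset between a shared fusion feature
# point (e.g. the mid-point of the forehead hairline), then merge pixels.
# The dict-based images and anchor coordinates are hypothetical.
def stitch(base, other, base_anchor, other_anchor):
    """base/other: dict (x, y) -> pixel value; anchors: coordinates of the
    shared feature point in each image. Returns a merged dict expressed in
    the base image's coordinates."""
    dx = base_anchor[0] - other_anchor[0]
    dy = base_anchor[1] - other_anchor[1]
    merged = dict(base)
    for (x, y), v in other.items():
        merged.setdefault((x + dx, y + dy), v)  # base pixels take priority
    return merged

left = {(0, 0): 'L', (1, 0): 'L'}
mid = {(5, 0): 'M', (6, 0): 'M'}
# The shared feature point is (1, 0) in the left image and (5, 0) in the
# middle image, so the middle image is shifted by (-4, 0) before merging.
pano = stitch(left, mid, (1, 0), (5, 0))
```

Because every contour sub-image pixel corresponds to a color sub-image pixel, the same per-pixel shift derived from the color stitching can be reused to stitch the contour sub-images.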
Referring to fig. 9, in some embodiments, step 05 of searching a standard hairstyle model matching a hairstyle of a user in a hairstyle database based on the hairstyle contour image and the hairstyle color image as a three-dimensional hairstyle model of the user includes:
0511: constructing a first coordinate system according to the hairstyle contour image;
0512: marking at least one first matching characteristic point in the hairstyle outline image, and acquiring first coordinates of the first matching characteristic point;
0513: constructing a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model;
0514: marking at least one second matching characteristic point in each standard hairstyle model, and obtaining second coordinates of the second matching characteristic points, wherein the first matching characteristic points correspond to the second matching characteristic points one by one; and
0515: and calculating the Euclidean distance between the corresponding first coordinate and the corresponding second coordinate, wherein the standard hairstyle model corresponding to the minimum Euclidean distance is the three-dimensional hairstyle model.
Referring to fig. 10, in some embodiments, the second building module 15 includes a first building unit 1511, a first labeling unit 1512, a second building unit 1513, a second labeling unit 1514, and a first computing unit 1515. Step 0511 may be implemented by the first setup unit 1511. Step 0512 may be implemented by the first labeling unit 1512. Step 0513 may be implemented by the second setup unit 1513. Step 0514 may be implemented by the second labeling unit 1514. Step 0515 may be implemented by the first computing unit 1515.
That is, the first establishing unit 1511 may be used to construct a first coordinate system from the hairstyle contour image. The first labeling unit 1512 may be configured to label at least one first matching feature point in the hairstyle contour image, and obtain a first coordinate of the first matching feature point. The second establishing unit 1513 may be configured to construct a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model. The second labeling unit 1514 may be configured to label at least one second matching feature point in each standard hairstyle model, and obtain a second coordinate of the second matching feature point, where the first matching feature points correspond to the second matching feature points one by one. The first calculating unit 1515 may be configured to calculate the Euclidean distance between each corresponding first coordinate and second coordinate, where the standard hairstyle model corresponding to the minimum Euclidean distance is the three-dimensional hairstyle model.
Referring back to fig. 3, in some embodiments, steps 0511, 0512, 0513, 0514, and 0515 may all be implemented by the processor 20. That is, the processor 20 may be further configured to construct a first coordinate system according to the hairstyle contour image, label at least one first matching feature point in the hairstyle contour image and obtain a first coordinate of the first matching feature point, construct a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model, label at least one second matching feature point in each standard hairstyle model and obtain a second coordinate of the second matching feature point, and calculate the Euclidean distance between each corresponding first coordinate and second coordinate, where the standard hairstyle model corresponding to the minimum Euclidean distance is the three-dimensional hairstyle model.
The first matching feature point may be the left temple, the right temple, the middle position point of the forehead hairline, the hair tip, etc., and the second matching feature point may correspondingly be the left temple, the right temple, the middle position point of the forehead hairline, the hair tip, etc.
Specifically, assuming that a first coordinate system is constructed by taking the middle position point of the forehead hairline as the origin, the first coordinate of each pixel point in the first coordinate system can be determined according to the pixel coordinates and depth information of each point in the hairstyle contour image. Similarly, in each standard hairstyle model, a second coordinate system is built by taking the middle position point of the forehead hairline as the origin, and the second coordinate of each pixel point in the second coordinate system can then be determined according to the pixel coordinates and depth information of each point in the standard hairstyle model. Subsequently, the first matching feature points of the left temple A1, the right temple B1 and the hair tip C1 are selected from the hairstyle contour image, where the first coordinate of A1 is (x_A1, y_A1, z_A1), the first coordinate of B1 is (x_B1, y_B1, z_B1), and the first coordinate of C1 is (x_C1, y_C1, z_C1); the second matching feature points of the left temple A2, the right temple B2 and the hair tip C2 are selected from the standard hairstyle model, where the second coordinate of A2 is (x_A2, y_A2, z_A2), the second coordinate of B2 is (x_B2, y_B2, z_B2), and the second coordinate of C2 is (x_C2, y_C2, z_C2). Subsequently, the processor 20 calculates the Euclidean distance D1 between A1 and A2, the Euclidean distance D2 between B1 and B2, and the Euclidean distance D3 between C1 and C2, respectively, according to the following formula:

D1 = √((x_A1 − x_A2)² + (y_A1 − y_A2)² + (z_A1 − z_A2)²)
D2 = √((x_B1 − x_B2)² + (y_B1 − y_B2)² + (z_B1 − z_B2)²)
D3 = √((x_C1 − x_C2)² + (y_C1 − y_C2)² + (z_C1 − z_C2)²)
the processor 20 then sums the three euclidean distances, namely euclidean distance sum d=d1+d2+d3, one euclidean distance sum for each standard hairstyle model. And selecting a standard hairstyle model corresponding to the minimum D value from the Euclidean distances and the D as a three-dimensional hairstyle model of the user.
It can be understood that a minimum Euclidean distance sum indicates that the matching degree between the standard hairstyle model and the hairstyle contour image of the user is highest, so the standard hairstyle model corresponding to the minimum Euclidean distance sum can be used as the three-dimensional hairstyle model of the user.
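Steps 0511 to 0515 can be sketched in Python as follows. Only the distance-sum criterion comes from the embodiment above; the function names and the dictionary representation of the hairstyle database are hypothetical.

```python
import math

def euclid(p, q):
    """Euclidean distance between two 3D points (x, y, z)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def match_hairstyle(user_points, models):
    """user_points: list of first coordinates (A1, B1, C1, ...).
    models: mapping of model name -> list of corresponding second
    coordinates (A2, B2, C2, ...), in the same one-to-one order.
    Returns the model whose Euclidean distance sum D = D1 + D2 + ...
    is smallest."""
    def dist_sum(model_points):
        return sum(euclid(p, q) for p, q in zip(user_points, model_points))
    return min(models, key=lambda name: dist_sum(models[name]))
```

For example, a model whose temple and hair-tip points lie closest (in summed Euclidean distance) to the user's marked points would be returned as the three-dimensional hairstyle model.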
Referring to fig. 11, in some embodiments, step 05 of searching a standard hairstyle model matching a hairstyle of a user in a hairstyle database based on the hairstyle contour image and the hairstyle color image as a three-dimensional hairstyle model of the user includes:
0521: constructing a first coordinate system according to the hairstyle contour image;
0522: marking a plurality of first matching feature points in the hairstyle outline image, and acquiring first coordinates of each first matching feature point;
0523: calculating a first Euclidean distance between every two first coordinates;
0524: constructing a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model;
0525: marking a plurality of second matching feature points in each standard hairstyle model, and obtaining second coordinates of each second matching feature point, wherein the first matching feature points correspond to the second matching feature points one by one; and
0526: calculating a second Euclidean distance between every two second coordinates;
0527: calculating the difference value between each pair of corresponding first Euclidean distance and second Euclidean distance, and calculating the sum of absolute values of a plurality of difference values; and
0528: and taking the standard hairstyle model with the minimum sum of absolute values of the differences as a three-dimensional hairstyle model of the user.
Referring to fig. 12, in some embodiments, the second building module 15 includes a third building unit 1521, a third labeling unit 1522, a second computing unit 1523, a fourth building unit 1524, a fourth labeling unit 1525, a third computing unit 1526, a fourth computing unit 1527, and a selecting unit 1528. Step 0521 may be implemented by the third building unit 1521. Step 0522 may be implemented by the third labeling unit 1522. Step 0523 may be implemented by the second computing unit 1523. Step 0524 may be implemented by the fourth building unit 1524. Step 0525 may be implemented by the fourth labeling unit 1525. Step 0526 may be implemented by the third computing unit 1526. Step 0527 may be implemented by the fourth computing unit 1527. Step 0528 may be implemented by the selecting unit 1528.
That is, the third construction unit 1521 may be used to construct the first coordinate system from the hairstyle contour image. The third labeling unit 1522 may be configured to label a plurality of first matching feature points in the hairstyle contour image, and obtain the first coordinates of each first matching feature point. The second calculating unit 1523 may be configured to calculate the first Euclidean distance between every two first coordinates. The fourth construction unit 1524 may be configured to construct a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model. The fourth labeling unit 1525 may be configured to label a plurality of second matching feature points in each standard hairstyle model, and obtain the second coordinates of each second matching feature point, where the first matching feature points are in one-to-one correspondence with the second matching feature points. The third calculating unit 1526 may be configured to calculate the second Euclidean distance between every two second coordinates. The fourth calculating unit 1527 may be configured to calculate the difference between each pair of corresponding first and second Euclidean distances, and calculate the sum of the absolute values of the differences. The selecting unit 1528 may be used to take the standard hairstyle model with the minimum sum of absolute values of the differences as the three-dimensional hairstyle model of the user.
Referring back to FIG. 3, in some embodiments, steps 0521 through 0528 may all be implemented by the processor 20. That is, the processor 20 may be further configured to construct a first coordinate system from the hairstyle contour image, to label a plurality of first matching feature points in the hairstyle contour image and obtain the first coordinates of each first matching feature point, to calculate the first Euclidean distance between every two first coordinates, to construct a second coordinate system corresponding to the standard hairstyle model from each standard hairstyle model, to label a plurality of second matching feature points in each standard hairstyle model and obtain the second coordinates of each second matching feature point, to calculate the second Euclidean distance between every two second coordinates, to calculate the difference between each pair of corresponding first and second Euclidean distances and the sum of the absolute values of the differences, and to take the standard hairstyle model with the smallest sum of absolute values of the differences as the three-dimensional hairstyle model of the user.
The first matching feature point may be the left temple, the right temple, the middle position point of the forehead hairline, the hair tip, etc., and the second matching feature point may likewise be the left temple, the right temple, the middle position point of the forehead hairline, the hair tip, etc.
Specifically, assuming that a first coordinate system is constructed by taking the middle position point of the forehead hairline as the origin, the first coordinate of each pixel point in the first coordinate system can be determined according to the pixel coordinates and depth information of each point in the hairstyle contour image. Similarly, in each standard hairstyle model, a second coordinate system is built by taking the middle position point of the forehead hairline as the origin, and the second coordinate of each pixel point in the second coordinate system can then be determined according to the pixel coordinates and depth information of each point in the standard hairstyle model. Subsequently, the first matching feature points of the left temple A1, the right temple B1 and the hair tip C1 are selected from the hairstyle contour image, where the first coordinate of A1 is (x_A1, y_A1, z_A1), the first coordinate of B1 is (x_B1, y_B1, z_B1), and the first coordinate of C1 is (x_C1, y_C1, z_C1); the second matching feature points of the left temple A2, the right temple B2 and the hair tip C2 are selected from the standard hairstyle model, where the second coordinate of A2 is (x_A2, y_A2, z_A2), the second coordinate of B2 is (x_B2, y_B2, z_B2), and the second coordinate of C2 is (x_C2, y_C2, z_C2).
Subsequently, the processor 20 calculates, according to the following formula, the first Euclidean distance D_{A1-B1} between A1 and B1, the first Euclidean distance D_{A1-C1} between A1 and C1, and the first Euclidean distance D_{B1-C1} between B1 and C1:

D_{A1-B1} = √((x_A1 − x_B1)² + (y_A1 − y_B1)² + (z_A1 − z_B1)²)
D_{A1-C1} = √((x_A1 − x_C1)² + (y_A1 − y_C1)² + (z_A1 − z_C1)²)
D_{B1-C1} = √((x_B1 − x_C1)² + (y_B1 − y_C1)² + (z_B1 − z_C1)²)
The processor 20 further calculates, according to the following formula, the second Euclidean distance D_{A2-B2} between A2 and B2, the second Euclidean distance D_{A2-C2} between A2 and C2, and the second Euclidean distance D_{B2-C2} between B2 and C2:

D_{A2-B2} = √((x_A2 − x_B2)² + (y_A2 − y_B2)² + (z_A2 − z_B2)²)
D_{A2-C2} = √((x_A2 − x_C2)² + (y_A2 − y_C2)² + (z_A2 − z_C2)²)
D_{B2-C2} = √((x_B2 − x_C2)² + (y_B2 − y_C2)² + (z_B2 − z_C2)²)
Subsequently, the processor 20 calculates a difference between each corresponding pair of the first euclidean distance and the second euclidean distance, for example:
first Euclidean distance D A1-B1 Distance D from the second European style A2-B2 Difference between V1, v1=d A1-B1 -D A2-B2
First Euclidean distance D A1-C1 Distance D from the second European style A2-C2 Difference V2, v2=d between A1-C1 -D A2-C2
First Euclidean distance D B1-C1 Distance D from the second European style B2-C2 Difference V3, v3=d between B1-C1 -D B2-C2
Subsequently, the processor 20 calculates the sum V of the absolute values of the differences: V = |V1| + |V2| + |V3|.
Each standard hairstyle model corresponds to one sum of absolute differences V. The processor 20 selects the standard hairstyle model corresponding to the minimum value of V as the three-dimensional hairstyle model of the user.
It can be understood that a minimum sum of absolute differences indicates that the matching degree between the standard hairstyle model and the hairstyle contour image of the user is highest, so the standard hairstyle model corresponding to the minimum sum of absolute differences can be used as the three-dimensional hairstyle model of the user.
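Steps 0521 to 0528 can likewise be sketched as follows. Comparing pairwise distances between feature points, rather than the coordinates themselves, makes the criterion insensitive to the choice of coordinate origin in each model. The function names and the database representation are hypothetical.

```python
import math
from itertools import combinations

def euclid(p, q):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def pairwise_dists(points):
    """Euclidean distance between every two points, in combination order."""
    return [euclid(p, q) for p, q in combinations(points, 2)]

def match_by_pairwise(user_points, models):
    """Pick the model minimising V = sum(|first distance - second distance|)
    over corresponding pairs of first and second matching feature points."""
    user_d = pairwise_dists(user_points)
    def v(model_points):
        return sum(abs(a - b)
                   for a, b in zip(user_d, pairwise_dists(model_points)))
    return min(models, key=lambda name: v(models[name]))
```

A model that is merely translated relative to the user's feature points still yields V = 0 here, illustrating why this variant tolerates differing coordinate origins.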
In some embodiments, the hairstyle database may contain multiple standard hairstyle models of the same style in different colors. In this case, the processor 20 may obtain multiple matched standard hairstyle models by the method of steps 0511 to 0515 or the method of steps 0521 to 0528, and needs to select one of them to be fused with the three-dimensional face model of the user. Specifically, the processor 20 may select, from the matched standard hairstyle models, the standard hairstyle model whose color matches the color of the hairstyle indicated by the hairstyle color image, and use it as the final three-dimensional hairstyle model. In this way, the final three-dimensional hairstyle model matches both the style and the color of the user's hairstyle, and the constructed three-dimensional head model better matches the actual appearance of the user.
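The color-based selection among matched models of the same style can be sketched as follows; the nearest-color-in-RGB criterion is an assumption, since the text does not specify how colors are compared.

```python
def closest_color_model(candidates, hair_rgb):
    """candidates: mapping of model name -> (r, g, b) color of each
    matched standard hairstyle model; hair_rgb: hairstyle color taken
    from the hairstyle color image. Returns the candidate whose color
    is nearest in RGB space (squared distance, no sqrt needed)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, hair_rgb))
    return min(candidates, key=lambda name: dist2(candidates[name]))
```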
Referring to fig. 13, in some embodiments, if there is only one standard hairstyle model of the same style in the hairstyle database, the method for constructing the three-dimensional head model of the user further includes, before step 06:
07: adjusting the color of the three-dimensional hairstyle model according to the hairstyle color image;
step 06 of fusing the three-dimensional facial model and the three-dimensional hairstyle model to construct a three-dimensional head model includes:
and fusing the three-dimensional face model and the three-dimensional hairstyle model after color adjustment to construct a three-dimensional head model.
Referring to fig. 14, in some embodiments, the build apparatus 10 further includes an adjustment module 17. Step 07 may be implemented by the adjustment module 17. Step 061 may be implemented by fusion module 16. That is, the adjustment module 17 may be used to adjust the color of the three-dimensional hairstyle model based on the hairstyle color image. The fusion module 16 may be used to fuse the three-dimensional facial model and the color-adjusted three-dimensional hairstyle model to construct a three-dimensional head model.
Referring back to fig. 3, in some embodiments, steps 07 and 061 may also be implemented by the processor 20. That is, the processor 20 may also be used to adjust the color of the three-dimensional hairstyle model based on the hairstyle color image, and to fuse the three-dimensional facial model and the color-adjusted three-dimensional hairstyle model to construct a three-dimensional head model.
It will be appreciated that when there is only one standard hair style model of the same style in the hair style database, the color of the standard hair style model selected to match the hair style profile image is fixed and does not necessarily match the color of the hair style in the hair style color image. Thus, the processor 20 adjusts the color of the selected standard hairstyle model according to the color of the hairstyle in the hairstyle color image.
In some embodiments, the colors extracted from the hairstyle color image may vary, for example when the user wears a decoration such as a hair band. The processor 20 therefore first divides the hairstyle color image into a plurality of color areas according to color, selects the color of the color area with the largest area as the color of the hairstyle in the hairstyle color image, and adjusts the color of the standard hairstyle model based on the selected color.
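The largest-color-area rule can be sketched as follows, assuming hairstyle pixels are given as a list of RGB triples; the per-channel quantisation into buckets is an assumption used to group near-identical shades into one "color area".

```python
from collections import Counter

def dominant_hair_color(pixels, bucket=32):
    """pixels: list of (r, g, b) hairstyle pixels. Quantise each channel
    into `bucket`-wide bins so near-identical shades fall into the same
    color area, then return the mean color of the largest area."""
    key = lambda p: (p[0] // bucket, p[1] // bucket, p[2] // bucket)
    bins = Counter(key(p) for p in pixels)
    top, _ = bins.most_common(1)[0]
    members = [p for p in pixels if key(p) == top]
    n = len(members)
    return tuple(sum(ch) / n for ch in zip(*members))
```

A hair-band region would form its own smaller color area and so would not influence the selected hairstyle color.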
The color-adjusted standard hairstyle model thus has the same color as the hairstyle of the user. The processor 20 fuses the color-adjusted standard hairstyle model with the three-dimensional face model of the user to obtain a three-dimensional head model that better matches the user's appearance; the three-dimensional head model is more realistic and attractive, and the user experience is better.
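One simple way to realise the color adjustment of step 07 is a mean-shift recolor that preserves per-texel shading detail. This is a hedged sketch under stated assumptions, not the method of the embodiment, which does not specify the adjustment algorithm; real implementations might instead work in a perceptual color space.

```python
def recolor(model_colors, target_rgb):
    """model_colors: list of (r, g, b) texel colors of the standard
    hairstyle model. Shift the model's mean color to target_rgb while
    keeping each texel's per-channel deviation from the mean, so
    shading variation survives the recolor. Channels clamp to 0-255."""
    n = len(model_colors)
    mean = [sum(c[i] for c in model_colors) / n for i in range(3)]
    out = []
    for c in model_colors:
        out.append(tuple(max(0, min(255, round(t + (c[i] - mean[i]))))
                         for i, t in enumerate(target_rgb)))
    return out
```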
Referring to fig. 3, the present invention further provides an electronic device 100. The electronic device 100 includes a processor 20, a memory 30, and one or more programs. Wherein one or more programs are stored in memory 30 and configured to be executed by the one or more processors 20. The program includes instructions for performing the construction method according to any one of the above embodiments.
For example, referring to fig. 1, the program includes instructions for performing the steps of:
01: acquiring depth images and color images of multiple frames of users at different angles;
02: determining a face depth image and a face color image of the user according to the multi-frame depth image and the multi-frame color image;
03: constructing a three-dimensional face model of the user according to the face depth image and the face color image;
04: determining a hairstyle outline image and a hairstyle color image of the user according to the multi-frame depth image and the multi-frame color image;
05: searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user; and
06: and fusing the three-dimensional face model and the three-dimensional hairstyle model to construct a three-dimensional head model.
For another example, referring to fig. 6, the program further includes instructions for performing the steps of:
0451: marking at least one fusion characteristic point in each frame of hairstyle color sub-image;
0452: splicing multi-frame hairstyle color sub-images according to the fusion characteristic points to obtain a hairstyle color image; and
0453: and splicing multiple frames of hairstyle contour sub-images according to the corresponding relation between the hairstyle contour sub-images and the hairstyle color sub-images and the fusion characteristic points to obtain the hairstyle contour images.
Referring to fig. 15, the present invention also provides a computer readable storage medium. The computer-readable storage medium includes a computer program for use in conjunction with the electronic device 100. The computer program is executable by the processor 20 to perform the construction method according to any of the embodiments described above.
For example, referring to FIG. 1, a computer program may be executed by the processor 20 to perform the steps of:
01: acquiring depth images and color images of multiple frames of users at different angles;
02: determining a face depth image and a face color image of the user according to the multi-frame depth image and the multi-frame color image;
03: constructing a three-dimensional face model of the user according to the face depth image and the face color image;
04: determining a hairstyle outline image and a hairstyle color image of the user according to the multi-frame depth image and the multi-frame color image;
05: searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user; and
06: and fusing the three-dimensional face model and the three-dimensional hairstyle model to construct a three-dimensional head model.
For another example, referring to fig. 6, the computer program may also be executed by the processor 20 to perform the following steps:
0451: marking at least one fusion characteristic point in each frame of hairstyle color sub-image;
0452: splicing multi-frame hairstyle color sub-images according to the fusion characteristic points to obtain a hairstyle color image; and
0453: and splicing multiple frames of hairstyle contour sub-images according to the corresponding relation between the hairstyle contour sub-images and the hairstyle color sub-images and the fusion characteristic points to obtain the hairstyle contour images.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (13)

1. A method for constructing a three-dimensional head model of a user, the method comprising:
acquiring a plurality of frames of depth images and color images of different angles of the user;
determining a face depth image and a face color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images;
constructing a three-dimensional face model of the user according to the face depth image and the face color image;
determining a hairstyle outline image and a hairstyle color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images;
searching a standard hairstyle model matched with the hairstyle of the user in a hairstyle database according to the hairstyle outline image and the hairstyle color image to serve as a three-dimensional hairstyle model of the user; and
fusing the three-dimensional facial model and the three-dimensional hairstyle model to construct the three-dimensional head model;
the determining the hairstyle contour image and the hairstyle color image of the user according to a plurality of frames of the depth images and a plurality of frames of the color images comprises:
extracting a head depth image and a head color image from the depth image and the color image corresponding to the depth image for each frame;
recognizing a human face in the head color image of each frame to extract an initial hair style color sub-image;
correcting the initial hairstyle color sub-image according to the head depth image corresponding to the head color image to obtain a hairstyle color sub-image;
determining a hairstyle contour sub-image according to the hairstyle color sub-image and the head depth image;
and respectively fusing a plurality of frames of the hairstyle outline sub-images and a plurality of frames of the hairstyle color sub-images to obtain the hairstyle outline images and the hairstyle color images.
2. The method of constructing of claim 1, wherein said step of fusing a plurality of frames of said hairstyle contour sub-image and a plurality of frames of said hairstyle color sub-image to obtain said hairstyle contour image and said hairstyle color image, respectively, comprises:
marking at least one fusion characteristic point in the hairstyle color sub-image of each frame;
splicing a plurality of frames of hairstyle color sub-images according to the fusion characteristic points to obtain the hairstyle color image; and
and splicing a plurality of frames of hairstyle contour sub-images according to the corresponding relation between the hairstyle contour sub-images and the hairstyle color sub-images and the fusion characteristic points to obtain the hairstyle contour images.
3. The method of constructing according to claim 1, wherein the step of finding a standard hairstyle model matching the user's hairstyle in a hairstyle database from the hairstyle contour image and hairstyle color image as a three-dimensional hairstyle model of the user comprises:
constructing a first coordinate system according to the hairstyle contour image;
marking at least one first matching characteristic point in the hairstyle outline image, and acquiring a first coordinate of the first matching characteristic point;
constructing a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model;
marking at least one second matching characteristic point in each standard hairstyle model, and obtaining second coordinates of the second matching characteristic points, wherein the first matching characteristic points correspond to the second matching characteristic points one by one; and
and calculating the Euclidean distance between the corresponding first coordinate and the corresponding second coordinate, wherein the standard hairstyle model corresponding to the minimum Euclidean distance is the three-dimensional hairstyle model.
4. The method of constructing according to claim 1, wherein the step of finding a standard hairstyle model matching the user's hairstyle in a hairstyle database from the hairstyle contour image and hairstyle color image as a three-dimensional hairstyle model of the user comprises:
constructing a first coordinate system according to the hairstyle contour image;
marking a plurality of first matching feature points in the hairstyle outline image, and acquiring first coordinates of each first matching feature point;
calculating a first Euclidean distance between every two first coordinates;
constructing a second coordinate system corresponding to the standard hairstyle model according to each standard hairstyle model;
marking a plurality of second matching feature points in each standard hairstyle model, and acquiring second coordinates of each second matching feature point, wherein the first matching feature points are in one-to-one correspondence with the second matching feature points; and
calculating a second Euclidean distance between every two second coordinates;
calculating the difference value between each pair of corresponding first Euclidean distance and second Euclidean distance, and calculating the sum of absolute values of a plurality of the difference values; and
and taking the standard hairstyle model with the minimum sum of absolute values of the difference values as a three-dimensional hairstyle model of the user.
5. The method of constructing according to claim 3 or 4, wherein after determining the three-dimensional hairstyle model of the user, the method of constructing further comprises:
adjusting the color of the three-dimensional hairstyle model according to the hairstyle color image;
the step of fusing the three-dimensional facial model and the three-dimensional hairstyle model to construct the three-dimensional head model includes:
and fusing the three-dimensional face model and the three-dimensional hairstyle model after color adjustment to construct the three-dimensional head model.
6. A construction apparatus for a three-dimensional head model of a user, the construction apparatus comprising:
an acquisition module configured to acquire multiple frames of depth images and color images of the user at different angles;
a first determining module configured to determine a face depth image and a face color image of the user according to the multiple frames of depth images and the multiple frames of color images;
a first construction module configured to construct a three-dimensional face model of the user according to the face depth image and the face color image;
a second determining module configured to determine a hairstyle contour image and a hairstyle color image of the user according to the multiple frames of depth images and the multiple frames of color images;
a second construction module configured to search a hairstyle database, according to the hairstyle contour image and the hairstyle color image, for a standard hairstyle model matching the hairstyle of the user to serve as a three-dimensional hairstyle model of the user; and
a fusion module configured to fuse the three-dimensional face model and the three-dimensional hairstyle model to construct the three-dimensional head model;
wherein the second determining module comprises an extraction unit, an identification unit, a correction unit, a determining unit, and a fusion unit;
the extraction unit is configured to extract a head depth image and a head color image from each frame of the depth image and the color image corresponding to that depth image;
the identification unit is configured to identify the face in each frame of the head color image to extract an initial hairstyle color sub-image;
the correction unit is configured to correct the initial hairstyle color sub-image according to the head depth image corresponding to the head color image to obtain a hairstyle color sub-image;
the determining unit is configured to determine a hairstyle contour sub-image according to the hairstyle color sub-image and the head depth image; and
the fusion unit is configured to fuse the multiple frames of hairstyle contour sub-images and the multiple frames of hairstyle color sub-images, respectively, to obtain the hairstyle contour image and the hairstyle color image.
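As a rough illustration of the per-frame processing performed by the extraction, identification, correction, and determining units, the sketch below masks out the detected face and then depth-corrects the remainder to obtain hair color and contour sub-images; the array shapes, the depth threshold, and every name here are assumptions, not the patent's implementation:

```python
import numpy as np

def extract_hair_subimages(head_color, head_depth, face_mask, depth_limit=1.0):
    """One hypothetical frame of the claim-6 pipeline.

    head_color: (H, W, 3) head color image
    head_depth: (H, W) depth map aligned with head_color
    face_mask:  (H, W) boolean mask of the identified face
    depth_limit: assumed depth beyond which a pixel is background
    """
    # Initial hairstyle color sub-image: head pixels minus the face
    initial_hair_mask = ~face_mask
    # Correction by depth: drop background pixels that face detection
    # alone cannot exclude
    hair_mask = initial_hair_mask & (head_depth < depth_limit)
    hair_color_sub = np.where(hair_mask[..., None], head_color, 0)
    # Hairstyle contour sub-image: binary silhouette of the hair region
    hair_contour_sub = hair_mask.astype(np.uint8)
    return hair_color_sub, hair_contour_sub
```

A full system would run this per frame and then fuse the per-frame sub-images, as the fusion unit above describes.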
7. An electronic device comprising a processor, the processor being configured to:
acquire multiple frames of depth images and color images of a user at different angles;
determine a face depth image and a face color image of the user according to the multiple frames of depth images and the multiple frames of color images;
construct a three-dimensional face model of the user according to the face depth image and the face color image;
determine a hairstyle contour image and a hairstyle color image of the user according to the multiple frames of depth images and the multiple frames of color images;
search a hairstyle database, according to the hairstyle contour image and the hairstyle color image, for a standard hairstyle model matching the hairstyle of the user to serve as a three-dimensional hairstyle model of the user; and
fuse the three-dimensional face model and the three-dimensional hairstyle model to construct a three-dimensional head model;
wherein the processor is further configured to:
extract, for each frame, a head depth image and a head color image from the depth image and the color image corresponding to that depth image;
identify the face in each frame of the head color image to extract an initial hairstyle color sub-image;
correct the initial hairstyle color sub-image according to the head depth image corresponding to the head color image to obtain a hairstyle color sub-image;
determine a hairstyle contour sub-image according to the hairstyle color sub-image and the head depth image; and
fuse the multiple frames of hairstyle contour sub-images and the multiple frames of hairstyle color sub-images, respectively, to obtain the hairstyle contour image and the hairstyle color image.
8. The electronic device of claim 7, wherein the processor is further configured to:
mark at least one fusion feature point in each frame of the hairstyle color sub-image;
splice the multiple frames of hairstyle color sub-images according to the fusion feature points to obtain the hairstyle color image; and
splice the multiple frames of hairstyle contour sub-images, according to the correspondence between the hairstyle contour sub-images and the hairstyle color sub-images and according to the fusion feature points, to obtain the hairstyle contour image.
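A toy, translation-only version of the claim-8 splicing step might look like the following; a real implementation would estimate a full per-frame transform from the fusion feature points, and the function name, data layout, and non-negative-offset assumption are all mine:

```python
import numpy as np

def splice_by_fusion_points(frames, fusion_points, canvas_shape):
    """Splice sub-images onto one canvas by aligning their fusion
    feature points, using only a translation per frame for simplicity.

    frames: list of (h, w) sub-images
    fusion_points: list of (N, 2) (row, col) fusion feature points,
        matched across frames by index; offsets relative to frame 0
        are assumed non-negative
    canvas_shape: (H, W) of the spliced output
    """
    canvas = np.zeros(canvas_shape, dtype=float)
    ref_center = np.asarray(fusion_points[0], dtype=float).mean(axis=0)
    for frame, pts in zip(frames, fusion_points):
        # translation aligning this frame's feature points with frame 0's
        shift = np.round(ref_center - np.asarray(pts, dtype=float).mean(axis=0))
        r0, c0 = shift.astype(int)
        h, w = frame.shape
        region = canvas[r0:r0 + h, c0:c0 + w]
        # keep the stronger response where frames overlap
        canvas[r0:r0 + h, c0:c0 + w] = np.maximum(region, frame)
    return canvas
```

The same translations, applied to the contour sub-images via the contour/color correspondence, would yield the spliced hairstyle contour image.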
9. The electronic device of claim 7, wherein the processor is further configured to:
construct a first coordinate system according to the hairstyle contour image;
mark at least one first matching feature point in the hairstyle contour image, and acquire a first coordinate of the first matching feature point;
construct, for each standard hairstyle model, a second coordinate system corresponding to that standard hairstyle model;
mark at least one second matching feature point in each standard hairstyle model, and acquire a second coordinate of the second matching feature point, the first matching feature points corresponding to the second matching feature points one to one; and
calculate the Euclidean distance between each corresponding first coordinate and second coordinate, the standard hairstyle model corresponding to the minimum Euclidean distance being the three-dimensional hairstyle model.
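The claim-9 selection rule — keep the standard model whose corresponding feature points lie closest, by Euclidean distance, to the contour image's feature points — could be sketched as below; it assumes the first and second coordinates have already been brought into a common coordinate system of the same dimension, and all names are illustrative:

```python
import numpy as np

def match_by_point_distance(first_coords, standard_models):
    """Return the name of the standard hairstyle model minimizing the
    summed Euclidean distance between corresponding feature points.

    first_coords: (N, D) first matching feature points
    standard_models: dict mapping a model name to its (N, D) second
        matching feature points (one-to-one with the first points)
    """
    first = np.asarray(first_coords, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, coords in standard_models.items():
        second = np.asarray(coords, dtype=float)
        # distance between each matched pair, summed over all points
        dist = np.linalg.norm(first - second, axis=1).sum()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

Unlike the claim-10 variant, this compares absolute positions rather than pairwise distances, so it is sensitive to how the two coordinate systems are aligned.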
10. The electronic device of claim 7, wherein the processor is further configured to:
construct a first coordinate system according to the hairstyle contour image;
mark a plurality of first matching feature points in the hairstyle contour image, and acquire a first coordinate of each first matching feature point;
calculate a first Euclidean distance between every two first coordinates;
construct, for each standard hairstyle model, a second coordinate system corresponding to that standard hairstyle model;
mark a plurality of second matching feature points in each standard hairstyle model, and acquire a second coordinate of each second matching feature point, the first matching feature points corresponding to the second matching feature points one to one;
calculate a second Euclidean distance between every two second coordinates;
calculate the difference between each pair of corresponding first and second Euclidean distances, and calculate the sum of the absolute values of the differences; and
take the standard hairstyle model with the minimum sum of the absolute values of the differences as the three-dimensional hairstyle model of the user.
11. The electronic device of claim 9 or 10, wherein the processor is further configured to:
adjust the color of the three-dimensional hairstyle model according to the hairstyle color image;
wherein fusing the three-dimensional face model and the three-dimensional hairstyle model to construct the three-dimensional head model comprises:
fusing the three-dimensional face model and the color-adjusted three-dimensional hairstyle model to construct the three-dimensional head model.
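One simple way to realize the claim-11 color adjustment is a mean-color shift of the model's texture toward the average hair color measured in the hairstyle color image; this particular scheme, and every name in it, is illustrative rather than what the patent prescribes:

```python
import numpy as np

def adjust_model_color(model_texture, hair_color_image, hair_mask):
    """Shift the model texture so its mean color matches the mean
    color of the hair pixels in the hairstyle color image.

    model_texture: (H, W, 3) RGB texture of the 3D hairstyle model,
        values in [0, 1]
    hair_color_image: (H2, W2, 3) RGB hairstyle color image
    hair_mask: (H2, W2) boolean mask selecting hair pixels
    """
    target_mean = hair_color_image[hair_mask].mean(axis=0)
    current_mean = model_texture.reshape(-1, 3).mean(axis=0)
    # per-channel shift, clipped back into the valid range
    adjusted = model_texture + (target_mean - current_mean)
    return np.clip(adjusted, 0.0, 1.0)
```

The adjusted texture would then be applied to the three-dimensional hairstyle model before the fusion step.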
12. An electronic device comprising:
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the construction method of any one of claims 1 to 5.
13. A computer-readable storage medium comprising a computer program for use in connection with an electronic device, the computer program being executable by a processor to perform the construction method of any one of claims 1 to 5.
CN201811088482.5A 2018-09-18 2018-09-18 Construction method, construction device, electronic device, and computer-readable storage medium Active CN110910487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811088482.5A CN110910487B (en) 2018-09-18 2018-09-18 Construction method, construction device, electronic device, and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN110910487A CN110910487A (en) 2020-03-24
CN110910487B true CN110910487B (en) 2023-07-25

Family

ID=69813531

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811088482.5A Active CN110910487B (en) 2018-09-18 2018-09-18 Construction method, construction device, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110910487B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113116572B (en) * 2021-03-01 2022-03-08 北京联袂义齿技术有限公司 False tooth model forming system and forming method based on cloud computing
CN113570702A (en) * 2021-07-14 2021-10-29 Oppo广东移动通信有限公司 3D photo generation method and device, terminal and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000339498A (en) * 1999-05-31 2000-12-08 Minolta Co Ltd Three-dimensional shape data processor
CN102663820A (en) * 2012-04-28 2012-09-12 清华大学 Three-dimensional head model reconstruction method
CN103366400A (en) * 2013-07-24 2013-10-23 深圳市华创振新科技发展有限公司 Method for automatically generating three-dimensional head portrait
CN104915981A (en) * 2015-05-08 2015-09-16 寇懿 Three-dimensional hairstyle design method based on somatosensory sensor
CN105144247A (en) * 2012-12-12 2015-12-09 微软技术许可有限责任公司 Generation of a three-dimensional representation of a user
CN106372333A (en) * 2016-08-31 2017-02-01 北京维盛视通科技有限公司 Method and device for displaying clothes based on face model
CN108463823A (en) * 2016-11-24 2018-08-28 华为技术有限公司 A kind of method for reconstructing, device and the terminal of user's Hair model


Also Published As

Publication number Publication date
CN110910487A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN107818305B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN105938627B (en) Processing method and system for virtual shaping of human face
CN107273846B (en) Human body shape parameter determination method and device
CN110738595B (en) Picture processing method, device and equipment and computer storage medium
CN111754415B (en) Face image processing method and device, image equipment and storage medium
CN107852533A (en) Three-dimensional content generating means and its three-dimensional content generation method
CN114140867A (en) Eye pose recognition using eye features
CN106981078B (en) Sight line correction method and device, intelligent conference terminal and storage medium
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
US20170213085A1 (en) See-through smart glasses and see-through method thereof
KR20170019779A (en) Method and Apparatus for detection of 3D Face Model Using Portable Camera
JP2016516369A (en) Photo output method and apparatus
KR101510312B1 (en) 3D face-modeling device, system and method using Multiple cameras
CN104157001A (en) Method and device for drawing head caricature
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
KR20120127790A (en) Eye tracking system and method the same
CN104750933A (en) Eyeglass trying on method and system based on Internet
US10789784B2 (en) Image display method, electronic device, and non-transitory computer readable recording medium for quickly providing simulated two-dimensional head portrait as reference after plastic operation
CN110910487B (en) Construction method, construction device, electronic device, and computer-readable storage medium
EP3506149A1 (en) Method, system and computer program product for eye gaze direction estimation
CN109274883A (en) Posture antidote, device, terminal and storage medium
JP5103682B2 (en) Interactive signage system
JP6552266B2 (en) Image processing apparatus, image processing method, and program
US11615549B2 (en) Image processing system and image processing method
KR101165017B1 (en) 3d avatar creating system and method of controlling the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant