CN108463823B - Reconstruction method and device of user hair model and terminal


Info

Publication number
CN108463823B
Authority
CN
China
Prior art keywords
image
hair
face
region
determining
Prior art date
Legal status
Active
Application number
CN201680060827.9A
Other languages
Chinese (zh)
Other versions
CN108463823A (en)
Inventor
李阳
李江伟
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Publication of CN108463823A publication Critical patent/CN108463823A/en
Application granted granted Critical
Publication of CN108463823B publication Critical patent/CN108463823B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

A method, a device and a terminal for reconstructing a user hair model can effectively reconstruct the hair model in a complex environment. The method comprises the following steps: acquiring a face front-view image of a reconstructed user; determining a hair region image of the reconstructed user according to the face front-view image; matching the hair region image with a three-dimensional (3D) hair model stored in a hair style database in advance to obtain a 3D hair model closest to the hair region image; and determining the 3D hair model closest to the hair region image as the 3D hair model of the reconstructed user.

Description

Reconstruction method and device of user hair model and terminal
Technical Field
The invention relates to the technical field of three-dimensional (3D) modeling, and in particular to a method, a device and a terminal for reconstructing a user hair model.
Background
With the continuous improvement of terminal processor performance, reconstructing high-quality 3D virtual characters from persons in planar images has become practical and is favored by manufacturers and users alike. In the process of reconstructing a 3D virtual character, accurately creating the user hair model plays an important role in the overall appearance of the character and can markedly enhance the realism of the reconstructed virtual character.
In creating a user hair model, the hair region of a person needs to be accurately identified. At present, hair region identification based on a hair color model is the mainstream method; fig. 1 shows a schematic diagram of this approach. The method first extracts a face area through face recognition technology, gathers statistics on hairstyles of various colors using the Red Green Blue (RGB) information of the hair area contained in the face area, and constructs a Gaussian Mixture Model (GMM) of hair colors. If the color of a pixel in the face area falls within the constructed hair color model, the pixel is identified as belonging to the hair region. By evaluating all pixels in the face area, a complete hair region can be obtained. The method is limited by the hair samples: if the samples are incomplete, the identification effect is poor. It is also affected by the surrounding environment: if the environment is complex and contains objects close to the hair color, a large number of false identifications occur, and if the person's hair is dyed, the method essentially fails to identify it.
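For illustration, the prior-art color-model approach can be sketched as follows; this is a minimal example rather than code from any cited system, and the component count and log-likelihood threshold are assumptions:

```python
# Illustrative sketch of the prior-art color-model approach: fit a GMM on RGB
# values sampled from known hair regions, then score face-region pixels against it.
import numpy as np
from sklearn.mixture import GaussianMixture

def build_hair_color_gmm(hair_pixels_rgb: np.ndarray, n_components: int = 5) -> GaussianMixture:
    """hair_pixels_rgb: (N, 3) array of RGB samples taken from labeled hair regions."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(hair_pixels_rgb.astype(np.float64))
    return gmm

def classify_hair_pixels(gmm: GaussianMixture, region_rgb: np.ndarray,
                         log_lik_thresh: float = -12.0) -> np.ndarray:
    """region_rgb: (H, W, 3) face-area image; returns a boolean hair mask."""
    flat = region_rgb.reshape(-1, 3).astype(np.float64)
    scores = gmm.score_samples(flat)  # per-pixel log-likelihood under the hair color model
    return (scores > log_lik_thresh).reshape(region_rgb.shape[:2])
```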
Another type of hair region recognition method is based on machine learning: each face image is labeled as face, hair and background, a Convolutional Neural Network (CNN) model is trained on a large number of labeled face images to obtain a recognition model for the hair region, and the trained model is then used to detect the hair region in a test image; the main flow is shown in fig. 2. This machine-learning method avoids the large number of false identifications that the color-model method produces when identifying hair in complex environments. However, it requires manual labeling of a large number of training images and very long model training time, so its application range is narrow.
In some complex environments, for example when objects close to the hair color are present and the space is narrow, hair region recognition based on a hair color model produces a large number of false identifications, while hair region recognition based on machine learning relies on equipment that is poorly portable and may not fit into a narrow space at all, so it is unsuitable for hair region recognition in such environments, and the human hair model then cannot be reconstructed there. In short, the prior art cannot effectively reconstruct the hair model in a complex environment.
Disclosure of Invention
The embodiment of the invention provides a method, a device and a terminal for reconstructing a user hair model, which are used for effectively reconstructing the hair model in a complex environment.
In a first aspect, a method for reconstructing a user hair model is provided, where a hair region image of a reconstructed user is determined according to an obtained face front view image of the reconstructed user, the hair region image is matched with a 3D hair model stored in a hair style database in advance to obtain a 3D hair model closest to the hair region image, and the 3D hair model closest to the hair region image is determined as the 3D hair model of the reconstructed user.
In this scheme, the hair region image of the reconstructed user is determined in the acquired face front-view image of the reconstructed user, the hair region image is matched against the 3D hair models stored in advance in the hair style database to obtain the 3D hair model closest to the hair region image, and that model is determined as the 3D hair model of the reconstructed user. Since no hair color modeling is needed, the identification process is not affected by hair color, and the error in identifying the hair region is small. Moreover, no training on face images is required, which saves a great deal of manual interaction and computation time, so detection is fast. The method for reconstructing a user hair model in the embodiment of the invention can therefore effectively reconstruct the hair model in a complex environment.
The face front-view image at least comprises a face area image, a hair area image and a background area image.
In one possible design, the hair region image of the reconstructed user may be determined from the face front-view image as follows:
determining a first area image in the face front-view image, wherein the first area image comprises the face area image, the hair area image and part of the background area image;
identifying a part of the background area image in the first area image, and determining an image except the identified part of the background area image in the first area image as a second area image;
and recognizing the face region image in the second region image, and determining the images except the recognized face region image in the second region image as the hair region image.
In one possible design, the first region image may be determined in the face front view image by:
detecting face feature points in the face front-view image; determining the face region image according to the face feature points, and determining, in the face front-view image, a first frame region capable of covering the face region image; expanding the first frame region, using it as a reference, to obtain a second frame region capable of covering the face region image, the hair region image and the partial background region image; and taking the image in the second frame region as the first region image.
In the above design, the face feature points are detected by a face detection algorithm (the specific algorithm is not limited by the invention), the face region image is determined from the detected face feature points, a first frame region capable of covering the face region image is determined from the face region image, and by enlarging the first frame region, a first region image containing the face region image, the hair region image and the partial background region image can be accurately obtained.
In one possible design, a part of the background region image may be identified in the first region image by:
determining foreground pixels and background pixels, the foreground pixels including the pixels of the eye feature points, nose feature points and mouth feature points among the face feature points, and the background pixels including the image pixels belonging to the face front-view image but not to the first region image; determining foreground pixels and background pixels in the first region image according to how well the pixels of the first region image match the foreground pixels and the background pixels; and determining the image corresponding to the background pixels in the first region image as the partial background region image.
In one possible design, the face region image may be identified in the second region image by:
converting pixels included in the second region image to the Hue Saturation Value (HSV) space; extracting face skin color pixels from the pixels included in the second region image according to their values on the three HSV components, and determining a face skin color region according to the face skin color pixels; determining a face contour region according to the face feature points; and determining the face region image in the second region image according to the face skin color region and the face contour region.
In the design, the face region image is determined according to the face skin color region and the face contour region, and compared with a method for determining the face region image only according to the face skin color region or the face contour region, the determined face region image is more accurate.
In one possible design, the hair region image may be matched with a 3D hair model stored in a hairstyle database in advance to obtain a 3D hair model closest to the hair region image as follows:
determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image; matching the feature descriptor against the feature descriptors corresponding to the 3D hair models stored in advance in the hair style database to obtain the feature descriptor in the hair style database closest to it; and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
In the above design, after the hair region image is accurately determined, it is matched against the hairstyles in the hair style database, and the user's real hairstyle information can be obtained.
In one possible design, the feature descriptors of the hair region image may be determined by:
determining an inner contour and an outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth corner feature points among the face feature points, and taking the midpoint as the origin of the rays used to scan the hair region image in all angular directions; recording, for each angular direction during the scan, the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour; and recording these distances as the feature descriptor of the hair region image.
Wherein the feature descriptor characterizes spatial features of the hair region image.
In the above design, obtaining the feature descriptor of the hair region image with the midpoint of the horizontal line connecting the two mouth corner feature points as the origin of the scanning rays allows the 3D hair model closest to the hair region image to be determined more accurately.
In a second aspect, a user hair model reconstruction apparatus is provided, which has the function of implementing the user hair model reconstruction method of the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions; the modules may be software and/or hardware.
In one possible design, the user hair model reconstruction apparatus includes an obtaining unit, a determining unit, a matching unit, and a processing unit, and the functions of the obtaining unit, the determining unit, the matching unit, and the processing unit may correspond to the steps of each method, which is not described herein again.
In a third aspect, the present invention further provides a terminal, including: an input device, a memory, a processor, a display screen, and a bus. The input device, the memory and the display screen are all connected to the processor through the bus. The input device is used for acquiring the face front-view image of the reconstructed user. The memory is used for storing the program executed by the processor, the face front-view image of the reconstructed user acquired by the input device, and the 3D hair models. The processor is configured to execute the program stored in the memory, and specifically to perform the operations of any design of the first aspect. The display screen is used for displaying the face front-view image of the reconstructed user acquired by the input device and the 3D hair model of the reconstructed user determined by the processor.
In a fourth aspect, the present invention also provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the operations as set forth in any of the designs of the first aspect.
Drawings
Fig. 1 is a schematic diagram of a conventional hair color model-based hair region identification method;
FIG. 2 is a schematic diagram of a prior art hair region identification method based on machine learning;
FIG. 3 is a flowchart of a method for reconstructing a user hair model according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a front view image of a human face according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for determining an image of the hair region according to the face front view image according to an embodiment of the present invention;
fig. 6 is a flowchart of a method for determining a first area image in the face front view image according to the embodiment of the present invention;
fig. 7 is a schematic diagram of a feature point of a human face in an orthographic view of the human face according to an embodiment of the invention;
fig. 8 is a schematic diagram of a face front-view image including the first frame region according to an embodiment of the present invention;
fig. 9 is a schematic diagram illustrating a method for determining the second frame region according to an embodiment of the present invention;
fig. 10 is a flowchart of a method for identifying a part of the background region image in the first region image according to an embodiment of the present invention;
fig. 11 is a schematic diagram of identifying a part of the background region image in the first region image and removing the part to obtain a second region image according to an embodiment of the present invention;
fig. 12 is a flowchart of a method for recognizing a face region image in a second region image according to an embodiment of the present invention;
fig. 13 is a flowchart of a processing procedure for recognizing and removing the face region image in the second region image according to the embodiment of the present invention;
FIG. 14 is a flowchart of a method for matching hair region images according to an embodiment of the present invention;
FIG. 15 is a flowchart of a method for determining a feature descriptor for an image of a hair region according to an embodiment of the present invention;
FIG. 16 is a flowchart illustrating a process for matching images of a hair region according to an embodiment of the present invention;
fig. 17 is a schematic diagram illustrating a reconstruction effect of a user hair model according to an embodiment of the present invention;
FIG. 18 is a schematic view of a device for reconstructing a hair model of a user according to an embodiment of the present invention;
fig. 19 is a schematic view of another user hair model reconstruction device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments, and not all embodiments, of the present invention.
The embodiment of the invention provides a method, a device and a terminal for reconstructing a user hair model, which are used for effectively reconstructing the hair model in a complex environment. The method, the device and the terminal are based on the same inventive concept, and because the principles of solving the problems of the method, the device and the terminal are similar, the implementation of the terminal, the device and the method can be mutually referred, and repeated parts are not repeated.
In order to ensure that the hair model is effectively reconstructed in a complex environment, the embodiment of the invention provides a reconstruction method of a user hair model. According to the method, a hair area image of a reconstructed user is determined according to an obtained face front-view image of the reconstructed user, the hair area image is matched with a 3D hair model stored in a hair style database in advance to obtain a 3D hair model closest to the hair area image, and the 3D hair model closest to the hair area image is determined as the 3D hair model of the reconstructed user.
The method for reconstructing the user hair model provided by the embodiment of the present invention may be applied to a terminal with relatively low storage capacity and relatively low computing capacity, and may also be applied to an electronic device with relatively high storage capacity and relatively high computing capacity, which is not specifically limited in the present application. The following describes a method for reconstructing a user hair model according to an embodiment of the present invention, with a device for reconstructing a user hair model as an execution subject.
Fig. 3 is a flowchart of the method for reconstructing a user hair model according to an embodiment of the present invention:
S101: The apparatus for reconstructing a user hair model acquires a face front-view image of the reconstructed user.
The device for reconstructing the hair model of the user in the embodiment of the invention can be a terminal with an image acquisition function, and the terminal with the image acquisition function can acquire the face front-view image through image acquisition equipment (such as a camera). Of course, the embodiment of the present invention does not limit the specific implementation manner of obtaining the front view image of the human face, and for example, a picture including the front view image of the human face may be stored in advance in the device for reconstructing the hair model of the user.
S102: and determining the hair area image of the reconstructed user according to the face front-view image.
S103: and matching the hair region image with a 3D hair model stored in a hair style database in advance to obtain a 3D hair model closest to the hair region image.
S104: and determining the 3D hair model closest to the hair region image as the 3D hair model of the reconstructed user.
The face front-view image acquired by the reconstruction device of the user hair model at least comprises a face region image, a hair region image and a background region image.
The face region image in the embodiment of the invention is the partial image of the face front-view image containing the face region, the hair region image is the partial image containing the hair region, and the background region image is the partial image containing the background region. For example, the face front-view image in fig. 4 is divided into three partial images A, B and C, where partial image A is the background region image, partial image B is the face region image, and partial image C is the hair region image.
In the embodiment of the invention, the hair region image in the face front-view image of the reconstructed user is finally determined by identifying the face region image, the hair region image and the background region image in the face front-view image. Fig. 5 shows a flowchart of the method for determining the hair region image from the face front-view image:
S201: Determining a first region image in the face front-view image.
S202: Identifying part of the background region image in the first region image, and determining the image other than the identified partial background region image in the first region image as a second region image.
S203: Recognizing the face region image in the second region image, and determining the image other than the recognized face region image in the second region image as the hair region image.
Wherein the first region image comprises the face region image, the hair region image and a part of the background region image.
The embodiment of the invention can acquire the face region image, the hair region image and the partial background region image in the face front-view image through face recognition technology, and thereby determine the first region image. Fig. 6 shows a flowchart of the method for determining the first region image in the face front-view image:
S301: Detecting face feature points in the face front-view image.
In the embodiment of the invention, the human face characteristic points in the human face front-view image can be detected according to the human face detection algorithm, but the specific human face detection algorithm is not limited.
The face feature points in the embodiment of the present invention include eye feature points, nose feature points, mouth feature points, eyebrow feature points, ear feature points, and face contour feature points. Fig. 7 is a schematic diagram of the face feature points in the face front-view image; each number marked in the diagram corresponds to one face feature point. In the face front-view image, 40 face feature points numbered 0 to 39 are marked: points 0 to 9 are eye feature points, points 10 to 18 are nose feature points, points 19 to 24 are mouth feature points, points 25 to 32 are eyebrow feature points, and points 33 to 39 are face contour feature points, of which points 36 to 39 also serve as ear feature points.
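For reference in the later sketches, these point groups can be written down directly; a minimal Python sketch, assuming a landmark detector that outputs the 40 points in exactly this order:

```python
# Index groups for the 40-point layout of fig. 7; the detector ordering is an assumption.
EYE_POINTS     = range(0, 10)    # points 0-9
NOSE_POINTS    = range(10, 19)   # points 10-18
MOUTH_POINTS   = range(19, 25)   # points 19-24 (19 and 24 taken here as the mouth corners)
EYEBROW_POINTS = range(25, 33)   # points 25-32
CONTOUR_POINTS = range(33, 40)   # points 33-39 (36-39 double as ear points)
```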
S302: Determining the face region image according to the face feature points, and determining, in the face front-view image, a first frame region capable of covering the face region image.
In the embodiment of the invention, in order to separate the person from the background in the face front-view image, a first frame region capable of covering the face region image is determined according to the face feature points. Fig. 8 shows a face front-view image including the first frame region; the white frame in the figure is the first frame region determined from the face feature points.
S303: Expanding the first frame region, using it as a reference, to obtain a second frame region capable of covering the face region image, the hair region image and the partial background region image.
In the embodiment of the present invention, the first frame region determined from the face feature points cannot be guaranteed to completely contain the face region image and the hair region image. To remove part of the background from the face front-view image while retaining the face region image and the hair region image, the first frame region is expanded, with itself as the reference, to obtain a second frame region capable of covering the face region image, the hair region image and the partial background region image. Fig. 9 is a schematic diagram of the method for determining the second frame region: taking the width of the first frame region as a reference length, the region is extended by 0.2 reference lengths to the left and to the right; taking the height of the first frame region as a reference length, it is extended by 0.5 reference lengths upward and by 0.3 reference lengths downward. This expansion ensures that the redetermined second frame region contains the complete face region image and the complete hair region image.
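A minimal sketch of this expansion, assuming the first frame region is given as (x, y, w, h) in image coordinates with the origin at the top-left; the clamping to the image border is an added assumption:

```python
def expand_face_box(x, y, w, h, img_w, img_h):
    """Expand the first frame region into the second frame region using the
    ratios above: 0.2*w to the left and right, 0.5*h up, 0.3*h down."""
    left   = max(0, int(x - 0.2 * w))
    top    = max(0, int(y - 0.5 * h))
    right  = min(img_w, int(x + w + 0.2 * w))
    bottom = min(img_h, int(y + h + 0.3 * h))
    return left, top, right - left, bottom - top  # second frame region (x, y, w, h)
```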
S304: Determining the image in the second frame region as the first region image.
In the embodiment of the invention, the partial background region image can be identified in the first region image using image recognition technology. Fig. 10 shows a flowchart of the method for identifying the partial background region image in the first region image:
S401: Foreground pixels and background pixels are determined.
In this embodiment of the present invention, the foreground pixels include the pixels of the eye feature points, nose feature points and mouth feature points among the face feature points, and the background pixels include the image pixels that belong to the face front-view image but not to the first region image. Specifically, the pixels of eye feature points 0 to 9, nose feature points 10 to 18 and mouth feature points 19 to 24 in fig. 7 may be determined as foreground pixels, and the pixels outside the second frame region shown in fig. 9 as background pixels.
S402: Determining foreground pixels and background pixels in the first region image according to how well the pixels of the first region image match the foreground pixels and the background pixels.
Wherein the foreground pixels in the first region image correspond to the face region image and the hair region image, and the background pixels in the first region image correspond to the partial background region image.
In the invention, the pixels of the first region image can be processed with a Gaussian mixture model algorithm and a max-flow/min-cut algorithm, classifying each pixel of the first region image as a foreground pixel or a background pixel.
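The combination of a Gaussian mixture model with max-flow/min-cut optimization is what OpenCV's GrabCut implements, so this step can be sketched with it. The seeding below follows S401 (eye, nose and mouth pixels as sure foreground, pixels outside the second frame region as sure background); the iteration count and the function name are assumptions:

```python
import cv2
import numpy as np

def segment_person(image_bgr, second_frame_rect, seed_points):
    """second_frame_rect: (x, y, w, h) second frame region; seed_points: (x, y)
    pixel coordinates of the eye, nose and mouth feature points."""
    mask = np.full(image_bgr.shape[:2], cv2.GC_BGD, dtype=np.uint8)  # outside: sure background
    x, y, w, h = second_frame_rect
    mask[y:y + h, x:x + w] = cv2.GC_PR_FGD       # inside the second frame region: probable foreground
    for (px, py) in seed_points:
        mask[py, px] = cv2.GC_FGD                # landmark pixels: sure foreground seeds
    bgd_model = np.zeros((1, 65), np.float64)    # GMM parameter buffers filled in by grabCut
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))  # True on face + hair (second region image)
```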
S403: Determining the image corresponding to the background pixels in the first region image as the partial background region image.
Fig. 11 is a schematic diagram illustrating that a part of the background region image is identified and removed from the first region image to obtain a second region image according to an embodiment of the present invention. As shown in fig. 11, the background region image included in the first region image is removed, and a second region image including only the face region image and the hair region image is obtained.
In the embodiment of the present invention, after obtaining the second region image including the face region image and the hair region image, the face region image may be further identified in the second region image.
Fig. 12 is a flowchart of the method for recognizing the face region image in the second region image according to an embodiment of the present invention:
S501: Converting pixels included in the second region image into a Hue Saturation Value (HSV) space.
S502: Extracting face skin color pixels from the pixels included in the second region image according to their values on the three HSV components, and determining a face skin color region according to the face skin color pixels.
S503: Determining a face contour region according to the face feature points.
In the embodiment of the present invention, face contour fitting is performed using face contour feature points No. 33 to No. 39 among the face feature points shown in fig. 7, so as to obtain the face contour region in the second region image.
S504: and determining the face region image in the second region image according to the face skin color region and the face contour region.
Fig. 13 shows the processing procedure for recognizing and removing the face region image from the second region image: the face skin color region is determined from the face skin color pixels, the face contour region is determined from the face feature points, and the face region image is determined from both, then removed, leaving the hair region image. Compared with determining the face region image from the skin color region or the face contour region alone, the face region image determined this way is more accurate.
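A minimal sketch of steps S501 to S504; the patent fixes no concrete HSV thresholds, so the skin-color range below is an illustrative assumption, and the face contour region is approximated by filling the polygon through contour points 33 to 39:

```python
import cv2
import numpy as np

def skin_mask_hsv(region_bgr):
    """S501-S502: convert to HSV and keep pixels inside an assumed skin range."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)      # assumed lower H, S, V bounds
    upper = np.array([25, 180, 255], dtype=np.uint8)   # assumed upper bounds
    return cv2.inRange(hsv, lower, upper) > 0

def face_region_mask(region_bgr, contour_points):
    """S503-S504: intersect the skin mask with the polygon fitted through the
    face contour feature points; removing it leaves the hair region image."""
    poly = np.zeros(region_bgr.shape[:2], dtype=np.uint8)
    cv2.fillPoly(poly, [np.asarray(contour_points, dtype=np.int32)], 255)
    return skin_mask_hsv(region_bgr) & (poly > 0)
```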
In the method, the hair region image of the reconstructed user is determined in the acquired face front-view image of the reconstructed user, the hair region image is matched against the 3D hair models stored in advance in the hair style database to obtain the 3D hair model closest to the hair region image, and that model is determined as the 3D hair model of the reconstructed user. Since no hair color modeling is needed, the identification process is not affected by hair color, and the error in identifying the hair region is small. Moreover, no training on face images is required, which saves a great deal of manual interaction and computation time, so detection is fast. The method for reconstructing a user hair model in the embodiment of the invention can therefore effectively reconstruct the hair model in a complex environment.
In the embodiment of the invention, after the hair region image in the face front-view image is identified, it is matched against the 3D hair models stored in advance in the hair style database to obtain the 3D hair model closest to the hair region image, which can then serve as the modeling model for layered hair modeling. Fig. 14 is a flowchart of the method for matching the hair region image:
S601: Determining a feature descriptor for the hair region image.
In the embodiment of the invention, the feature descriptor represents the spatial features of the hair region image.
S602: Matching the feature descriptor against the feature descriptors corresponding to the 3D hair models stored in advance in the hair style database, to obtain the feature descriptor in the hair style database closest to it.
S603: Determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
In the embodiment of the present invention, before the hair region image is matched with the 3D hair model stored in the hair style database in advance, a feature descriptor of the hair region image may be determined, and a specific implementation manner for determining the feature descriptor of the hair region image is provided below.
Fig. 15 is a flowchart of the method for determining the feature descriptor of the hair region image according to the present invention:
S701: An inner contour and an outer contour of the hair region image are determined.
In the embodiment of the present invention, the inner contour and the outer contour of the hair region image may be determined by binarizing the hair region image; the specific method used to determine the two contours is not limited.
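One concrete way to obtain the two contours, assuming OpenCV is used, is the two-level contour hierarchy on the binarized hair mask; this specific choice is not mandated by the patent:

```python
import cv2
import numpy as np

def hair_contours(hair_mask):
    """hair_mask: boolean array, True on hair pixels. Top-level contours are the
    outer hair boundary; contours with a parent are holes, i.e. the inner boundary."""
    binary = hair_mask.astype(np.uint8) * 255
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    outer = [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]  # no parent: outer
    inner = [c for c, h in zip(contours, hierarchy[0]) if h[3] != -1]  # has parent: inner
    return inner, outer
```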
S702: Determining the midpoint of the horizontal line connecting the two mouth corner feature points among the face feature points, and taking the midpoint as the origin of the rays used to scan the hair region image in all angular directions.
S703: Recording, for each angular direction while scanning the hair region image in all angular directions, the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour.
S704: Recording the distance from the origin to the inner contour, the distance from the origin to the outer contour and the distance from the inner contour to the outer contour as the feature descriptor of the hair region image.
Fig. 16 shows the processing flow for matching a hair region image according to the present invention: the midpoint of the horizontal line connecting the two mouth corner feature points among the face feature points is taken as the scanning origin, and the hair region image is scanned in all angular directions to obtain its feature descriptor. Taking this midpoint as the scanning origin makes the descriptor more accurate and better able to represent the spatial features of the hair region image, so the 3D hair model obtained by the above matching is closer to the real information of the user's hair region image.
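A minimal sketch of the feature descriptor (S701 to S704) and the nearest-descriptor matching (S601 to S603); the 1-degree angular step, the per-ray scan limit, and the Euclidean matching metric are assumptions the patent leaves open:

```python
import numpy as np

def hair_descriptor(hair_mask, origin, step_deg=1.0, max_r=2000):
    """hair_mask: boolean image, True on hair pixels; origin: (x, y) midpoint of
    the two mouth corner feature points. Returns an (n_angles, 3) array of
    (origin->inner, origin->outer, inner->outer) distances per ray."""
    ox, oy = origin
    h, w = hair_mask.shape
    rows = []
    for ang in np.arange(0.0, 360.0, step_deg):
        dx, dy = np.cos(np.radians(ang)), np.sin(np.radians(ang))
        inner = outer = 0.0
        for r in range(1, max_r):
            x, y = int(ox + r * dx), int(oy + r * dy)
            if not (0 <= x < w and 0 <= y < h):
                break
            if hair_mask[y, x]:
                if inner == 0.0:
                    inner = r   # first hair pixel along the ray: inner contour crossing
                outer = r       # last hair pixel seen so far: outer contour crossing
        rows.append((inner, outer, outer - inner))
    return np.asarray(rows)

def closest_model(descriptor, database):
    """database: {model_id: stored descriptor of the same shape}. Returns the id
    whose descriptor is nearest to the query in Euclidean distance."""
    return min(database, key=lambda k: np.linalg.norm(database[k] - descriptor))
```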
Fig. 17 is a schematic diagram of the reconstruction effect of a user hair model provided by the present invention. It shows the result of applying the reconstruction method of fig. 3: the face front-view image is processed layer by layer to segment the hair region image, the hair region image is matched against the 3D hair models in the database, and the 3D hair model closest to the hair region image is determined as the modeling model for layered hair modeling.
It will be appreciated that the reconstruction means for a model of the user's hair, in order to carry out the above-mentioned functions, comprise corresponding hardware structures and/or software modules for performing the respective functions. The elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein may be embodied in hardware or in a combination of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present teachings.
The functional units of the device for reconstructing a user hair model according to the above method example can be divided, for example, the functional units can be divided corresponding to the functions, or two or more functions can be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of using integrated units, fig. 18 shows a schematic structural diagram of a reconstruction apparatus 1000 of a user hair model, and as shown in fig. 18, the reconstruction apparatus 1000 of a user hair model includes an obtaining unit 1001, a determining unit 1002, a matching unit 1003, and a processing unit 1004, where:
an acquisition unit 1001 configured to acquire a face orthographic view image of a reconstructed user;
a determining unit 1002, configured to determine a hair region image of the reconstructed user according to the face front view image acquired by the acquiring unit 1001;
a matching unit 1003, configured to match the hair region image determined by the determining unit 1002 with a 3D hair model pre-stored in a hair style database, to obtain a 3D hair model closest to the hair region image;
a processing unit 1004, configured to determine the 3D hair model closest to the hair region image matched by the matching unit 1003 as the 3D hair model of the reconstructed user.
The face front view image acquired by the acquiring unit 1001 at least includes a face region image, a hair region image, and a background region image.
In a possible implementation manner, the determining unit 1002 may determine the hair region image of the reconstructed user according to the face front-view image acquired by the acquiring unit 1001 as follows:
determining a first area image in the face front-view image, wherein the first area image comprises the face area image, the hair area image and part of the background area image; identifying a part of the background area image in the first area image, and determining an image except the identified part of the background area image in the first area image as a second area image; and recognizing the face region image in the second region image, and determining the images except the recognized face region image in the second region image as the hair region image.
In a possible implementation manner, the determining unit 1002 determines the first region image in the face front-view image as follows:
detecting face feature points in the face front-view image; determining the face region image according to the face feature points, and determining, in the face front-view image, a first frame region capable of covering the face region image; expanding the first frame region, using it as a reference, to obtain a second frame region capable of covering the face region image, the hair region image and the partial background region image; and taking the image in the second frame region as the first region image.
In one possible implementation manner, the determining unit 1002 may identify a partial background area image in the first area image by:
determining foreground pixels and background pixels, wherein the foreground pixels comprise pixels of eye feature points, pixels of nose feature points and pixels of mouth feature points which are included in the face feature points, and the background pixels comprise image pixels which belong to the face front-view image and do not belong to the first area image; and obtaining foreground pixels and background pixels in the first area image according to the matching degree of the pixels of the first area image with the foreground pixels and the background pixels, and determining an image corresponding to the background pixels in the first area image as the partial background area image.
Foreground pixels in the first area image correspond to the face area image and the hair area image, and background pixels of the first area image correspond to the partial background area image.
In a possible implementation manner, the determining unit 1002 may identify the face region image in the second region image by:
converting pixels included in the second region image to HSV space; extracting face skin color pixels from the pixels included in the second region image according to their values on the three HSV components, and determining a face skin color region according to the face skin color pixels; determining a face contour region according to the face feature points; and determining the face region image in the second region image according to the face skin color region and the face contour region.
In a possible implementation manner, the matching unit 1003 matches the hair region image with a 3D hair model stored in a hair style database in advance, to obtain a 3D hair model closest to the hair region image, as follows:
determining a feature descriptor for the hair region image; matching the feature descriptor against the feature descriptors corresponding to the 3D hair models stored in advance in the hair style database to obtain the feature descriptor in the hair style database closest to it; and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
Wherein the feature descriptor characterizes spatial features of the hair region image.
In one possible implementation manner, the matching unit 1003 determines a feature descriptor of the hair region image by using the following method:
determining an inner contour and an outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth corner feature points among the face feature points, and taking the midpoint as the origin of the rays used to scan the hair region image in all angular directions; recording, for each angular direction during the scan, the distance from the origin to the inner contour, the distance from the origin to the outer contour and the distance from the inner contour to the outer contour; and recording these distances as the feature descriptor of the hair region image.
In the embodiment of the present invention, each functional unit may be integrated into one processor, may exist alone physically, or may be integrated into one unit by two or more units. The integrated unit can be realized in a form of hardware or a form of a software functional module.
When the integrated unit is implemented in the form of hardware, the determining unit 1002, the matching unit 1003 and the processing unit 1004 may be the processor 2001 in the physical hardware of the user hair model reconstruction apparatus, and the obtaining unit 1001 may be the input device 2003 in the physical hardware of the user hair model reconstruction apparatus, as shown in fig. 19. Fig. 19 is another schematic structural diagram of a reconstruction device of a user hair model, and the reconstruction device of the user hair model shown in fig. 19 may be a terminal. In the following, a reconstruction apparatus of a user hair model is taken as an example of a terminal, as shown in fig. 19, the terminal 2000 includes a processor 2001, and may further include a memory 2002 for storing a program executed by the processor 2001, a face front-view image of a reconstructed user acquired by the input device 2003, and a 3D hair model. The processor 2001 is configured to call the program stored in the memory 2002 and the face front view image of the reconstructed user stored in the memory 2002, determine the hair region image in the face front view image, match the hair region image with a 3D hair model stored in the memory in advance, obtain a 3D hair model closest to the hair region image, and determine the 3D hair model closest to the hair region image as the 3D hair model of the reconstructed user.
The memory 2002 may be a volatile memory (e.g., a random-access memory (RAM)); the memory 2002 may also be a non-volatile memory (e.g., a read-only memory (ROM)), a flash memory (flash memory), a hard disk (HDD) or a solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to this. The memory 2002 may be a combination of the above.
The input device 2003 included in the terminal 2000 is used for acquiring the face front-view image of the reconstructed user and for configuring the 3D hair models pre-stored in the hair style database; once configured, they are saved in the memory 2002.
The terminal 2000 mentioned above may further include a display screen 2004 for displaying the face orthographic image of the reconstructed user acquired by the input device 2003 and the 3D hair model of the reconstructed user determined by the processor 2001.
The processor 2001, memory 2002, input device 2003 and display 2004 may be connected via a bus 2005, among other things. The connection between other components is merely illustrative and not intended to be limiting. The bus 2005 can be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 19, but it is not intended that there be only one bus or one type of bus.
The processor 2001 is configured to execute the program code stored in the memory 2002, and specifically performs the following operations:
acquiring a face front-view image of a reconstructed user; determining a hair region image of the reconstructed user according to the face front-view image; matching the hair region image with a three-dimensional (3D) hair model stored in a hair style database in advance to obtain a 3D hair model closest to the hair region image; and determining the 3D hair model closest to the hair region image as the 3D hair model of the reconstructed user.
The face front-view image at least comprises a face area image, a hair area image and a background area image.
The processor 2001 may determine the hair region image of the reconstructed user from the face front-view image as follows:
and determining a first area image in the face front-view image, wherein the first area image comprises the face area image, the hair area image and part of the background area image. And identifying part of the background area image in the first area image, and determining the image except the identified part of the background area image in the first area image as a second area image. And recognizing the face region image in the second region image, and determining the images except the recognized face region image in the second region image as the hair region image.
The processor 2001 may determine the first region image in the face front view image by:
detecting face feature points in the face front-view image; determining the face region image according to the face feature points, and determining, in the face front-view image, a first frame region capable of covering the face region image; expanding the first frame region, using it as a reference, to obtain a second frame region capable of covering the face region image, the hair region image and the partial background region image; and taking the image in the second frame region as the first region image.
The processor 2001 may identify a part of the background region image in the first region image by:
determining foreground pixels and background pixels, wherein the foreground pixels comprise pixels of eye feature points, pixels of nose feature points and pixels of mouth feature points which are included in the face feature points, and the background pixels comprise image pixels which belong to the face front-view image and do not belong to the first area image; and obtaining foreground pixels and background pixels in the first area image according to the matching degree of the pixels of the first area image with the foreground pixels and the background pixels, and determining an image corresponding to the background pixels in the first area image as the partial background area image.
Foreground pixels in the first area image correspond to the face area image and the hair area image, and background pixels of the first area image correspond to the partial background area image.
The processor 2001 may recognize the face region image in the second region image by:
converting pixels included in the second region image to HSV space; extracting face skin color pixels from the pixels included in the second region image according to their values on the three HSV components, and determining a face skin color region according to the face skin color pixels; determining a face contour region according to the face feature points; and determining the face region image in the second region image according to the face skin color region and the face contour region.
The processor 2001 may match the hair region image with a 3D hair model stored in advance in a hairstyle database to obtain a 3D hair model closest to the hair region image by:
determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image; matching the feature descriptor against the feature descriptors corresponding to the 3D hair models stored in advance in the hair style database to obtain the feature descriptor in the hair style database closest to it; and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
The processor 2001 may determine the feature descriptor of the hair region image by:
determining an inner contour and an outer contour of the hair region image; determining the midpoint of the horizontal line connecting the two mouth corner feature points among the face feature points, and taking the midpoint as the origin of the rays used to scan the hair region image in all angular directions; recording, for each angular direction during the scan, the distance from the origin to the inner contour, the distance from the origin to the outer contour and the distance from the inner contour to the outer contour; and recording these distances as the feature descriptor of the hair region image.
According to the method and the device for reconstructing a user hair model, the hair region image of the reconstructed user is determined in the acquired face front-view image of the reconstructed user, the hair region image is matched against the 3D hair models stored in advance in the hair style database to obtain the 3D hair model closest to the hair region image, and that model is determined as the 3D hair model of the reconstructed user. Since no hair color modeling is needed, the identification process is not affected by hair color, and the error in identifying the hair region is small. Moreover, no training on face images is required, which saves a great deal of manual interaction and computation time, so detection is fast. The method for reconstructing a user hair model in the embodiment of the invention can therefore effectively reconstruct the hair model in a complex environment.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (12)

1. A method of reconstructing a model of a user's hair, comprising:
acquiring a face front-view image of a reconstructed user, wherein the face front-view image comprises at least a face region image, a hair region image, and a background region image;
detecting face feature points in the face front-view image;
determining the face region image according to the face feature points, and determining, in the face front-view image, a first frame region capable of covering the face region image;
enlarging the first frame region, with the first frame region as a reference, to obtain a second frame region capable of covering the face region image, the hair region image, and a part of the background region image;
determining the image within the second frame region as a first region image, wherein the first region image comprises the face region image, the hair region image, and the part of the background region image;
identifying the part of the background region image in the first region image, and determining the image remaining in the first region image after the identified part of the background region image is excluded as a second region image;
converting pixels of the second region image to hue-saturation-value (HSV) space;
extracting face skin-color pixels from the pixels of the second region image according to their values on the three HSV components, and determining a face skin-color region according to the face skin-color pixels;
determining a face contour region according to the face feature points;
determining the face region image in the second region image according to the face skin-color region and the face contour region;
determining the image remaining in the second region image after the identified face region image is excluded as the hair region image;
matching the hair region image with three-dimensional (3D) hair models pre-stored in a hair style database to obtain the 3D hair model closest to the hair region image;
and determining the 3D hair model closest to the hair region image as the 3D hair model of the reconstructed user.
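As an illustration of the HSV skin-color step in claim 1, a minimal sketch follows, assuming OpenCV-style input in BGR channel order; the threshold bounds and the name skin_mask are assumptions for demonstration, since the patent does not publish its component ranges.

```python
import cv2
import numpy as np

def skin_mask(region_bgr):
    """Convert the candidate region to HSV and keep pixels whose H, S, V
    values fall inside an assumed skin-color range."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed lower H/S/V bounds
    upper = np.array([25, 180, 255], dtype=np.uint8)  # assumed upper H/S/V bounds
    return cv2.inRange(hsv, lower, upper)             # 255 where the pixel looks like skin
```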
2. The method of claim 1, wherein identifying the part of the background region image in the first region image comprises:
determining foreground pixels and background pixels, wherein the foreground pixels comprise the pixels of the eye feature points, nose feature points, and mouth feature points among the face feature points, and the background pixels comprise pixels that belong to the face front-view image but not to the first region image;
determining foreground pixels and background pixels in the first region image according to the degree to which the pixels of the first region image match the determined foreground pixels and background pixels;
and determining the image corresponding to the background pixels in the first region image as the part of the background region image.
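One plausible realization of the seeded foreground/background split in claim 2 is OpenCV's GrabCut initialized with a mask, where feature-point pixels are marked as sure foreground and pixels outside the first frame region as sure background. This is a sketch under those assumptions rather than the patent's prescribed algorithm; split_background, region_rect, and feature_points are illustrative names.

```python
import cv2
import numpy as np

def split_background(image_bgr, region_rect, feature_points):
    """Seed GrabCut with sure-foreground feature points and sure-background
    pixels outside the first frame region, then return a background mask."""
    mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    x, y, w, h = region_rect
    mask[y:y + h, x:x + w] = cv2.GC_PR_FGD                  # inside the frame: probable foreground
    mask[:y, :], mask[y + h:, :] = cv2.GC_BGD, cv2.GC_BGD   # outside the frame: sure background
    mask[:, :x], mask[:, x + w:] = cv2.GC_BGD, cv2.GC_BGD
    for (px, py) in feature_points:                          # eye/nose/mouth seeds: sure foreground
        mask[py, px] = cv2.GC_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_BGD, cv2.GC_PR_BGD))        # True where background
```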
3. The method according to any one of claims 1-2, wherein matching the hair region image with the 3D hair models pre-stored in the hair style database to obtain the 3D hair model closest to the hair region image comprises:
determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image;
matching the feature descriptor with the feature descriptors corresponding to the 3D hair models pre-stored in the hair style database to obtain the feature descriptor in the hair style database closest to it;
and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
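The claim leaves the similarity measure between descriptors open; the sketch below assumes fixed-length descriptors compared by L2 distance, which is an assumption for illustration, not a requirement of the patent.

```python
import numpy as np

def closest_model(query_desc, database):
    """Return the ID of the database entry whose descriptor is nearest to
    `query_desc` under L2 distance (an assumed measure)."""
    best_id, best_dist = None, float("inf")
    for model_id, desc in database.items():
        dist = np.linalg.norm(query_desc - desc)   # L2 distance between descriptors
        if dist < best_dist:
            best_id, best_dist = model_id, dist
    return best_id                                 # ID of the closest 3D hair model
```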
4. The method of claim 3, wherein determining the feature descriptor of the hair region image comprises:
determining an inner contour and an outer contour of the hair region image;
determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and taking that midpoint as the origin of rays that scan the hair region image through all angular directions;
recording, for each angular direction during the scan, the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour;
and recording these three distances as the feature descriptor of the hair region image.
5. A user hair model reconstruction apparatus, comprising:
an acquisition unit, configured to acquire a face front-view image of a reconstructed user, wherein the face front-view image comprises at least a face region image, a hair region image, and a background region image;
a determining unit, configured to: detect face feature points in the face front-view image; determine the face region image according to the face feature points, and determine, in the face front-view image, a first frame region capable of covering the face region image; enlarge the first frame region, with the first frame region as a reference, to obtain a second frame region capable of covering the face region image, the hair region image, and a part of the background region image; determine the image within the second frame region as a first region image, wherein the first region image comprises the face region image, the hair region image, and the part of the background region image;
identify the part of the background region image in the first region image, and determine the image remaining in the first region image after the identified part of the background region image is excluded as a second region image;
convert pixels of the second region image to hue-saturation-value (HSV) space; extract face skin-color pixels from the pixels of the second region image according to their values on the three HSV components, and determine a face skin-color region according to the face skin-color pixels; determine a face contour region according to the face feature points; determine the face region image in the second region image according to the face skin-color region and the face contour region; and determine the image remaining in the second region image after the identified face region image is excluded as the hair region image;
a matching unit, configured to match the hair region image determined by the determining unit with the 3D hair models pre-stored in a hair style database to obtain the 3D hair model closest to the hair region image;
and a processing unit, configured to determine the 3D hair model closest to the hair region image, obtained by the matching unit, as the 3D hair model of the reconstructed user.
6. The apparatus according to claim 5, wherein the determining unit identifies the part of the background region image in the first region image by:
determining foreground pixels and background pixels, wherein the foreground pixels comprise the pixels of the eye feature points, nose feature points, and mouth feature points among the face feature points, and the background pixels comprise pixels that belong to the face front-view image but not to the first region image;
determining foreground pixels and background pixels in the first region image according to the degree to which the pixels of the first region image match the determined foreground pixels and background pixels;
and determining the image corresponding to the background pixels in the first region image as the part of the background region image.
7. The apparatus according to any one of claims 5 to 6, wherein the matching unit matches the hair region image with the 3D hair models pre-stored in the hair style database to obtain the 3D hair model closest to the hair region image by:
determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image;
matching the feature descriptor with the feature descriptors corresponding to the 3D hair models pre-stored in the hair style database to obtain the feature descriptor in the hair style database closest to it;
and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
8. The apparatus of claim 7, wherein the matching unit determines the feature descriptor of the hair region image by:
determining an inner contour and an outer contour of the hair region image;
determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and taking that midpoint as the origin of rays that scan the hair region image through all angular directions;
recording, for each angular direction during the scan, the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour;
and recording these three distances as the feature descriptor of the hair region image.
9. A terminal comprising an input device, a memory, a processor, a display screen, and a bus, wherein the input device, the memory, and the display screen are all connected to the processor via the bus, and wherein:
the input device is configured to acquire a face front-view image of a reconstructed user, wherein the face front-view image comprises at least a face region image, a hair region image, and a background region image;
the memory is configured to store a program executed by the processor, the face front-view image of the reconstructed user acquired by the input device, and 3D hair models;
the processor is configured to invoke the program stored in the memory and the face front-view image of the reconstructed user stored in the memory, and to: detect face feature points in the face front-view image; determine the face region image according to the face feature points, and determine, in the face front-view image, a first frame region capable of covering the face region image; enlarge the first frame region, with the first frame region as a reference, to obtain a second frame region capable of covering the face region image, the hair region image, and a part of the background region image; determine the image within the second frame region as a first region image, wherein the first region image comprises the face region image, the hair region image, and the part of the background region image; identify the part of the background region image in the first region image, and determine the image remaining in the first region image after the identified part of the background region image is excluded as a second region image; convert pixels of the second region image to hue-saturation-value (HSV) space; extract face skin-color pixels from the pixels of the second region image according to their values on the three HSV components; determine a face skin-color region according to the face skin-color pixels; determine a face contour region according to the face feature points; determine the face region image in the second region image according to the face skin-color region and the face contour region; determine the image remaining in the second region image after the identified face region image is excluded as the hair region image; match the hair region image with the 3D hair models pre-stored in the memory to obtain the 3D hair model closest to the hair region image; and determine the 3D hair model closest to the hair region image as the 3D hair model of the reconstructed user;
and the display screen is configured to display the face front-view image of the reconstructed user acquired by the input device and the 3D hair model of the reconstructed user determined by the processor.
10. The terminal of claim 9, wherein the processor identifies the part of the background region image in the first region image by:
determining foreground pixels and background pixels, wherein the foreground pixels comprise the pixels of the eye feature points, nose feature points, and mouth feature points among the face feature points, and the background pixels comprise pixels that belong to the face front-view image but not to the first region image;
determining foreground pixels and background pixels in the first region image according to the degree to which the pixels of the first region image match the determined foreground pixels and background pixels;
and determining the image corresponding to the background pixels in the first region image as the part of the background region image.
11. The terminal according to any one of claims 9 to 10, wherein the processor matches the hair region image with the 3D hair models pre-stored in the hair style database to obtain the 3D hair model closest to the hair region image by:
determining a feature descriptor of the hair region image, the feature descriptor characterizing spatial features of the hair region image;
matching the feature descriptor with the feature descriptors corresponding to the 3D hair models pre-stored in the hair style database to obtain the feature descriptor in the hair style database closest to it;
and determining the 3D hair model corresponding to that closest feature descriptor as the 3D hair model closest to the hair region image.
12. The terminal of claim 11, wherein the processor determines the feature descriptor of the hair region image by:
determining an inner contour and an outer contour of the hair region image;
determining the midpoint of the horizontal line connecting the two mouth-corner feature points among the face feature points, and taking that midpoint as the origin of rays that scan the hair region image through all angular directions;
recording, for each angular direction during the scan, the distance from the origin to the inner contour, the distance from the origin to the outer contour, and the distance from the inner contour to the outer contour;
and recording these three distances as the feature descriptor of the hair region image.
CN201680060827.9A 2016-11-24 2016-11-24 Reconstruction method and device of user hair model and terminal Active CN108463823B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/107121 WO2018094653A1 (en) 2016-11-24 2016-11-24 User hair model re-establishment method and apparatus, and terminal

Publications (2)

Publication Number Publication Date
CN108463823A CN108463823A (en) 2018-08-28
CN108463823B true CN108463823B (en) 2021-06-01

Family

ID=62194696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680060827.9A Active CN108463823B (en) 2016-11-24 2016-11-24 Reconstruction method and device of user hair model and terminal

Country Status (2)

Country Link
CN (1) CN108463823B (en)
WO (1) WO2018094653A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110910487B (en) * 2018-09-18 2023-07-25 Oppo广东移动通信有限公司 Construction method, construction device, electronic device, and computer-readable storage medium
CN109299323B (en) * 2018-09-30 2021-05-25 Oppo广东移动通信有限公司 Data processing method, terminal, server and computer storage medium
CN109408653B (en) 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation
CN109544445B (en) * 2018-12-11 2023-04-07 维沃移动通信有限公司 Image processing method and device and mobile terminal
CN111510769B (en) * 2020-05-21 2022-07-26 广州方硅信息技术有限公司 Video image processing method and device and electronic equipment
CN113763228B (en) * 2020-06-01 2024-03-19 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111833240B (en) * 2020-06-03 2023-07-25 北京百度网讯科技有限公司 Face image conversion method and device, electronic equipment and storage medium
CN112862807B (en) * 2021-03-08 2024-06-25 网易(杭州)网络有限公司 Hair image-based data processing method and device
CN113269822B (en) * 2021-05-21 2022-04-01 山东大学 Person hair style portrait reconstruction method and system for 3D printing
CN113538455B (en) * 2021-06-15 2023-12-12 聚好看科技股份有限公司 Three-dimensional hairstyle matching method and electronic equipment
CN113962845B (en) * 2021-08-25 2023-08-29 北京百度网讯科技有限公司 Image processing method, image processing apparatus, electronic device, and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354743A (en) * 2007-08-09 2009-01-28 湖北莲花山计算机视觉和信息科学研究院 Image base for human face image synthesis
CN101593365A (en) * 2009-06-19 2009-12-02 电子科技大学 A kind of method of adjustment of universal three-dimensional human face model
CN101630363B (en) * 2009-07-13 2011-11-23 中国船舶重工集团公司第七〇九研究所 Rapid detection method of face in color image under complex background
CN101923637B (en) * 2010-07-21 2016-03-16 康佳集团股份有限公司 A kind of mobile terminal and method for detecting human face thereof and device
CN102419868B (en) * 2010-09-28 2016-08-03 三星电子株式会社 Equipment and the method for 3D scalp electroacupuncture is carried out based on 3D hair template
CN102567998A (en) * 2012-01-06 2012-07-11 西安理工大学 Head-shoulder sequence image segmentation method based on double-pattern matching and edge thinning
CN103065360B (en) * 2013-01-16 2016-08-24 中国科学院重庆绿色智能技术研究院 A kind of hair shape effect map generalization method and system
CN103235931A (en) * 2013-03-29 2013-08-07 天津大学 Human eye fatigue detecting method
CN103400110B (en) * 2013-07-10 2016-11-23 上海交通大学 Abnormal face detecting method before ATM cash dispenser
CN103366400B (en) * 2013-07-24 2017-09-12 深圳市华创振新科技发展有限公司 A kind of three-dimensional head portrait automatic generation method
CN103905733B (en) * 2014-04-02 2018-01-23 哈尔滨工业大学深圳研究生院 A kind of method and system of monocular cam to real time face tracking
CN105279186A (en) * 2014-07-17 2016-01-27 腾讯科技(深圳)有限公司 Image processing method and system
CN104157001A (en) * 2014-08-08 2014-11-19 中科创达软件股份有限公司 Method and device for drawing head caricature
CN105069180A (en) * 2015-06-19 2015-11-18 上海卓易科技股份有限公司 Hair style design method and system
CN105139415A (en) * 2015-09-29 2015-12-09 小米科技有限责任公司 Foreground and background segmentation method and apparatus of image, and terminal
CN105389548A (en) * 2015-10-23 2016-03-09 南京邮电大学 Love and marriage evaluation system and method based on face recognition

Also Published As

Publication number Publication date
CN108463823A (en) 2018-08-28
WO2018094653A1 (en) 2018-05-31

Similar Documents

Publication Publication Date Title
CN108463823B (en) Reconstruction method and device of user hair model and terminal
CN108701216B (en) Face recognition method and device and intelligent terminal
CN109829930B (en) Face image processing method and device, computer equipment and readable storage medium
KR102045695B1 (en) Facial image processing method and apparatus, and storage medium
EP3323249B1 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
US20180357819A1 (en) Method for generating a set of annotated images
US20210334998A1 (en) Image processing method, apparatus, device and medium for locating center of target object region
CN110852310B (en) Three-dimensional face recognition method and device, terminal equipment and computer readable medium
KR20200118076A (en) Biometric detection method and device, electronic device and storage medium
US20160092726A1 (en) Using gestures to train hand detection in ego-centric video
CN108876886B (en) Image processing method and device and computer equipment
CN113034354B (en) Image processing method and device, electronic equipment and readable storage medium
US20190197204A1 (en) Age modelling method
CN112633221A (en) Face direction detection method and related device
CN109410138B (en) Method, device and system for modifying double chin
CN110945537A (en) Training device, recognition device, training method, recognition method, and program
CN113469092A (en) Character recognition model generation method and device, computer equipment and storage medium
CN109166172B (en) Clothing model construction method and device, server and storage medium
CN113012030A (en) Image splicing method, device and equipment
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
JP6467817B2 (en) Image processing apparatus, image processing method, and program
CN114549598A (en) Face model reconstruction method and device, terminal equipment and storage medium
CN113920556A (en) Face anti-counterfeiting method and device, storage medium and electronic equipment
CN107992853B (en) Human eye detection method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210428

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Device Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant