WO2020228389A1 - Method and apparatus for creating a face model, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number: WO2020228389A1
Authority: WIPO (PCT)
Prior art keywords: key point, face, face image, point feature, partial
Application number: PCT/CN2020/076134
Other languages: English (en), French (fr)
Inventors: 徐胜伟, 王权, 朴镜潭, 钱晨
Original Assignee: 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Application filed by 北京市商汤科技开发有限公司
Priority to JP2021516410A (patent JP7191213B2)
Priority to KR1020217008646A (patent KR102523512B1)
Priority to SG11202103190VA
Publication of WO2020228389A1
Priority to US17/212,523 (patent US11836943B2)

Classifications

    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 20/653: Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06V 40/165: Human faces; detection, localisation, normalisation using facial parts and geometric relationships
    • G06V 40/168: Human faces; feature extraction, face representation
    • G06V 40/171: Human faces; local features and components, facial parts, occluding parts (e.g. glasses), geometrical relationships
    • G06V 40/172: Human faces; classification, e.g. identification
    • G06T 2207/30201: Indexing scheme for image analysis; subject of image: face

Definitions

  • This application relates to the field of three-dimensional modeling technology, and in particular to methods, devices and electronic equipment for creating face models.
  • "Face pinching" (捏脸) refers to creating a three-dimensional face model for a virtual character.
  • the embodiments of the application provide a method, device and electronic device for creating a face model.
  • A method for creating a face model is provided, including: performing key point detection on a current face image to obtain at least one key point feature of the current face image; obtaining, according to the at least one key point feature, target bone parameters matching the current face image; and creating, based on the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.
  • The method further includes: determining a reference model database according to a preset number of face image samples and the standard three-dimensional face model, wherein the reference model database includes at least one reference key point feature determined from the preset number of face image samples and a reference bone parameter corresponding to each of the at least one reference key point feature.
  • Obtaining target bone parameters matching the current face image includes: obtaining, from the reference model database according to the at least one key point feature, the target bone parameters matching the current face image.
  • Determining the reference model database according to the preset number of face image samples and the standard three-dimensional face model includes: obtaining a face image sample set containing the preset number of face image samples, where the face image sample set includes multiple image styles representing at least one partial face area; for each of the face image samples, creating, based on the standard three-dimensional face model, a reference face model corresponding to the face image sample, where the reference face model includes the reference bone parameter corresponding to the face image sample; and determining the reference model database according to the reference face model corresponding to each face image sample.
  • The reference model database includes the correspondence between the reference key point features characterizing each image style of each partial face region and the reference bone parameters.
  • Creating the reference face model corresponding to the face image sample according to the standard three-dimensional face model includes: normalizing the face image sample to obtain a preprocessed face image conforming to the head posture and image size of a standard face image, where the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model; performing key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, where the reference key point set includes reference key point combinations characterizing each partial face region on the face image sample; and adjusting the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination to create the reference face model corresponding to the face image sample.
  • Performing key point detection on the current face image to obtain at least one key point feature of the current face image includes: performing key point detection on the current face image to obtain the position coordinates of a preset number of key points; and determining, according to the position coordinates of the preset number of key points, a key point feature representing at least one partial face area on the current face image.
  • Determining the key point feature representing at least one partial face region on the current face image according to the position coordinates of the preset number of key points includes: determining, according to the position coordinates of the preset number of key points, a key point coordinate combination characterizing a first partial face area on the current image as the key point feature characterizing the first partial face area, where the first partial face area is any one of the at least one partial face area; and/or fitting, according to the key point coordinate combination characterizing the first partial face area, a characteristic curve characterizing the first partial face area as the key point feature characterizing the first partial face area.
  • Obtaining target bone parameters matching the current face image from the reference model database includes: for each partial face area in the current face image, determining the reference key point feature in the reference model database that matches the key point feature of the partial face area as the target reference key point feature of the partial face area; and determining the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point features of each partial face region in the current face image.
  • Determining the reference key point feature matching the key point feature of the partial face area in the reference model database as the target reference key point feature of the partial face area includes: determining the similarity between the key point feature of the partial face area and the corresponding reference key point features in the reference model database; and determining the reference key point feature with the highest similarity as the target reference key point feature of the partial face area.
  • Determining the similarity between the key point feature of the local face area and the corresponding reference key point feature in the reference model database includes: fitting, according to the key point coordinate combination of the local face area, a characteristic curve characterizing the local face area; and determining, according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database, the similarity between the key point feature of the local face area and the corresponding reference key point feature in the reference model database.
  • When the partial face area includes at least two sub-regions, determining the similarity between the key point feature of the partial face area and the corresponding reference key point feature of a face image sample in the reference model database includes: for each of the sub-regions in the partial face region, determining the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample in the reference model database, to obtain the local similarity corresponding to the sub-region; and determining, according to the local similarity corresponding to each of the sub-regions, the overall similarity between the partial face region and the corresponding partial face region in the face image sample, as the similarity between the key point feature of the partial face region and the corresponding reference key point feature of the face image sample in the reference model database.
  • An apparatus for creating a face model is provided, including: a key point detection module configured to perform key point detection on a current face image to obtain at least one key point feature of the current face image; a parameter matching module configured to obtain target bone parameters matching the current face image according to the at least one key point feature; and a model creation module configured to create, according to the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.
  • The apparatus further includes a database creation module configured to determine the reference model database based on a preset number of face image samples and the standard three-dimensional face model, where the reference model database includes at least one reference key point feature determined from the preset number of face image samples and reference bone parameters corresponding to each of the at least one reference key point feature.
  • The parameter matching module is specifically configured to obtain target bone parameters matching the current face image from the reference model database according to the at least one key point feature.
  • The database creation module includes: a sample acquisition sub-module configured to acquire a face image sample set containing the preset number of face image samples, where the face image sample set includes multiple image styles representing at least one partial face region; and a reference model creation sub-module configured to create, for each face image sample according to the standard three-dimensional face model, a reference face model corresponding to the face image sample, the reference face model including the reference bone parameters corresponding to the face image sample; the reference model database is determined according to the reference face model corresponding to each of the face image samples.
  • The reference model database includes the correspondence between the key point features characterizing each image style of each partial face region and the reference bone parameters.
  • The reference model creation sub-module includes: an image preprocessing unit configured to normalize a face image sample to obtain a preprocessed face image conforming to the head posture and image size of the standard face image, where the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model; a key point detection unit configured to perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, where the reference key point set includes reference key point combinations characterizing each partial face region on the face image sample; and a model creation unit configured to adjust the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination to create the reference face model corresponding to the face image sample.
  • The key point detection module includes: a key point positioning sub-module configured to perform key point detection on the current face image to obtain the position coordinates of a preset number of key points; and a key point feature determination sub-module configured to determine, according to the position coordinates of the preset number of key points, a key point feature representing at least one partial face area on the current face image.
  • The key point feature determination sub-module includes: a coordinate combination determining unit configured to determine, based on the position coordinates of the preset number of key points, the key point coordinate combination of a first partial face area on the current image as the key point feature characterizing the first partial face area, where the first partial face area is any one of the at least one partial face area; and/or a characteristic curve determination unit configured to fit, according to the key point coordinate combination characterizing the first partial face area, a characteristic curve characterizing the first partial face area as the key point feature characterizing the first partial face area.
  • The parameter matching module includes: a feature matching sub-module configured to determine, for each partial face region in the current face image, the reference key point feature in the reference model database that matches the key point feature of the partial face area as the target reference key point feature of the partial face area; and a skeleton parameter determination sub-module configured to determine the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point features of each partial face area in the current face image.
  • The feature matching sub-module includes: a similarity determination unit configured to determine the similarity between the key point feature of the partial face region and the corresponding reference key point features in the reference model database; and a target feature determining unit configured to determine the reference key point feature with the highest similarity as the target reference key point feature of the partial face area.
  • The similarity determination unit includes: a curve fitting subunit configured to fit, according to the key point coordinate combination of the local face region, a characteristic curve characterizing the local face region; and a similarity determination subunit configured to determine, according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database, the similarity between the key point feature of the local face region and the corresponding reference key point feature in the reference model database.
  • The similarity determination unit includes: a local similarity determination subunit configured to, when the local face area includes at least two sub-areas, determine, for each sub-region in the local face region and for each face image sample in the reference model database, the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample, to obtain the local similarity corresponding to the sub-region; and an overall similarity determination subunit configured to determine, for each face image sample in the reference model database and according to the local similarity corresponding to each sub-region, the overall similarity between the local face region and the corresponding local face region in the face image sample, as the similarity between the key point feature of the local face region and the corresponding reference key point feature of the face image sample in the reference model database.
  • A computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the method according to any one of the foregoing aspects is implemented.
  • An electronic device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the foregoing method when executing the program.
  • In the embodiments of this application, the computer system automatically obtains target bone parameters corresponding to the face image based on the key point features characterizing the local face areas, automatically adjusts the bone parameters of the standard three-dimensional face model according to the target bone parameters, and can thus automatically create a virtual three-dimensional face model that fits the current face image. Throughout the model creation process, users do not need to manually adjust complex bone parameters according to their own subjective judgment, which reduces the difficulty of user operations.
  • the computer system may pre-configure the reference model database, and then quickly match the target bone parameters corresponding to the face image from the reference model database.
  • The regularity of local facial features keeps the data volume of the reference model database small, so the computer system can quickly match the target bone parameters from the reference model database according to the key point features of the current face image, and can then use the target bone parameters to create, efficiently and relatively accurately, a virtual three-dimensional face model matching the current face image.
  • Fig. 1 is a flowchart of a method for creating a face model according to an exemplary embodiment of the present application.
  • Fig. 2 is a schematic diagram showing an application scenario for creating a face model according to an exemplary embodiment of the present application.
  • Figures 3-1 and 3-2 are schematic diagrams of application scenarios for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 4 is a flowchart showing a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 5 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 6 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 7 is a schematic diagram showing an application scenario for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 8 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 9-1, Fig. 9-2 and Fig. 9-3 are schematic diagrams of application scenarios for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 10 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 11 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 12 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 13 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 14 is a block diagram showing an apparatus for creating a face model according to an exemplary embodiment of the present application.
  • Fig. 15 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 16 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 17 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 18 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 19 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 20 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 21 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 22 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 23 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
  • Fig. 24 is a schematic structural diagram of an electronic device according to another exemplary embodiment of the present application.
  • Although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
  • For example, without departing from the scope of this application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
  • The word "if" as used herein can be interpreted as "when", "upon", or "in response to determining".
  • the "virtual character” has evolved from a single virtual image to a character designed by the player, thereby allowing the creation of a more individual character image.
  • A system implementing the method for creating a three-dimensional face model of a virtual character based on virtual bone control may include a computer system, and may also include a camera for collecting face images.
  • the above-mentioned computer system may be installed in a server, a server cluster or a cloud platform, or may be an electronic device such as a personal computer or a mobile terminal.
  • The above-mentioned mobile terminal may specifically be an electronic device such as a smart phone, a PDA (Personal Digital Assistant), a tablet computer, or a game console.
  • In an embodiment, the camera and the computer system are independent of each other while being connected, to jointly implement the method for creating a face model provided in the embodiments of the present application.
  • the method may include:
  • Step 110 Perform key point detection on the current face image to obtain at least one key point feature of the current face image.
  • each of the key point features can represent one or more partial face regions on the current face image.
  • the game application interface may provide a user operation entry.
  • the game player can input a face image through the user operation portal, expecting that the background program of the computer system can create a corresponding virtual three-dimensional face model based on the face image.
  • the computer system can create a virtual three-dimensional face model based on the face image input by the game player through the face pinching function, so as to meet the game player's individual needs for the game character.
  • the aforementioned current face image may be taken by a game player, or may be selected by the game player from a picture database.
  • the aforementioned current face image may be an image taken for a person in the real world, or a virtual person portrait designed manually or using drawing software.
  • the embodiment of the present application does not limit the current way of acquiring the face image and the real existence of the person in the image in the real world.
  • the computer system may first perform normalization processing on the current face image to obtain a face area image with a preset head posture and a preset image size.
  • a pre-trained neural network is used to perform processing such as face detection, face posture correction, image scaling, etc., to obtain a face image with a preset image size and conforming to the preset head posture.
  • the computer system can use any face key point detection method well known to those skilled in the relevant art to perform key point detection on the aforementioned preprocessed face region image to obtain key point features of the current face image.
  • The key point features of the current face image may include the position coordinate information of key points, and may also include characteristic curves fitted from the position coordinate information of multiple key points to represent local face areas, such as contour curves like eyelid lines and lip lines.
  • Step 120 Obtain target bone parameters matching the current face image according to the key point feature.
  • step 120 may specifically include obtaining target bone parameters matching the current face image from a reference model database according to the key point feature.
  • the reference model database includes reference key point features determined from a preset number of face image samples and reference bone parameters corresponding to each of the reference key point features.
  • In view of the strong regularity of facial features, each face part can be characterized by a limited number of image styles.
  • For example, a limited number of eye shapes can express the eye characteristics of most people, and a limited number of eyebrow styles can characterize the eyebrow characteristics of most people.
  • the use of twelve eyebrow shapes can cover the eyebrow features of most faces.
  • the computer system may determine the reference model database according to a certain number of face image samples in advance.
  • The reference model database includes the reference key point features determined from the face image samples and the reference bone parameter corresponding to each reference key point feature; the reference bone parameter may be used to generate (render) the reference face model of the face image sample.
  • After the computer system obtains the key point features of the current face image, it can find the reference key point feature most similar to each key point feature as the target reference key point feature, and then obtain from the reference model database the reference bone parameter corresponding to the target reference key point feature as the target bone parameter adapted to the current face image.
  • That is, the reference model database may include reference bone parameters representing the reference face models used to generate the face image samples, and the correspondence between the reference key point features acquired from the face image samples and the reference bone parameters.
  • In this way, the key point feature characterizing each preset local face area can be matched against the reference model database, and the reference bone parameter corresponding to that key point feature can be obtained from it as the target bone parameter of that local face area. A set of target bone parameters can thus be obtained, such as the target bone parameters of the eyes, mouth, eyebrows, nose, and facial contour, which together fit the current face image.
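  • As an illustration of this matching step, the following is a minimal sketch, assuming a reference database keyed by partial face region in which each entry pairs reference key point coordinates with a set of reference bone parameters; all names and the database layout are invented for illustration and are not prescribed by this application:

```python
import numpy as np

# Hypothetical reference model database: one entry per image style of each
# partial face region, pairing reference key point coordinates with the
# reference bone parameters of the corresponding reference face model.
reference_db = {
    "left_eyebrow": [
        {"ref_points": np.array([[80, 15], [85, 12], [90, 11], [95, 12], [100, 15]]),
         "bone_params": {"brow_bone_1": {"translation": (0.0, 0.1, 0.0),
                                         "scale": (1.1, 1.0, 1.0)}}},
        # ... one entry per eyebrow image style in the sample set
    ],
    # ... other regions: eyes, nose, mouth, face contour
}

def match_target_bone_params(keypoint_features):
    """For each partial face region, pick the reference entry whose key point
    coordinates are closest (smallest summed Euclidean distance) and collect
    its reference bone parameters as the target bone parameters."""
    target = {}
    for region, points in keypoint_features.items():
        best = min(reference_db[region],
                   key=lambda e: np.linalg.norm(points - e["ref_points"], axis=1).sum())
        target[region] = best["bone_params"]
    return target
```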
  • Step 130 Create a virtual three-dimensional face model corresponding to the current face image according to the target bone parameters and the standard three-dimensional face model.
  • The computer system can adjust the parameters of the bones in the standard three-dimensional face model according to the above target bone parameters to generate a virtual three-dimensional face model reflecting the facial features of the current face image.
  • the virtual three-dimensional face model may be a virtual three-dimensional face model close to the facial features of an actual person, or it may be a cartoonized virtual three-dimensional face model reflecting the demeanor of the person.
  • the embodiments of the present application do not limit that the finally output three-dimensional face model must be close to the facial features of real-world characters.
  • Refer to FIG. 3-1 for a schematic diagram of a standard three-dimensional face model according to an exemplary embodiment.
  • the standard three-dimensional face model belongs to a cartoonized virtual three-dimensional face model.
  • Fig. 3-2 shows a skeleton diagram of the above-mentioned standard three-dimensional face model.
  • the entire model consists of a preset number of bone structures, such as 61 bones.
  • the line between every two points in Figure 3-2 represents a bone.
  • Each part involves one or more bones.
  • the nose part involves 3 bones. By adjusting the parameters of the 3 bones, different types of 3D nose models can be generated.
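  • The following sketch illustrates how matched bone parameters could drive a skeletal model such as the one in Fig. 3-2. The bone names, the parameter set (translation, rotation, scale), and the update mechanism are assumptions for illustration; the application only states that each face part involves one or more adjustable bones:

```python
from dataclasses import dataclass

@dataclass
class Bone:
    name: str
    translation: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0)
    scale: tuple = (1.0, 1.0, 1.0)

# e.g., the nose part involves three bones (names invented)
standard_model = {b.name: b for b in
                  [Bone("nose_root"), Bone("nose_bridge"), Bone("nose_tip")]}

def apply_target_bone_params(model, bone_params):
    """Overwrite the standard model's bone parameters with the matched
    target parameters to obtain the virtual 3D face model."""
    for bone_name, params in bone_params.items():
        bone = model[bone_name]
        for attr, value in params.items():  # translation / rotation / scale
            setattr(bone, attr, value)
    return model
```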
  • the computer system automatically obtains the target bone parameters corresponding to the face image based on the key point features that characterize the local face area, and automatically adjusts the bone parameters of the standard three-dimensional face model according to the target bone parameters.
  • users do not need to manually adjust complex bone parameters according to their own subjective judgments, which reduces the difficulty of user operations.
  • the computer system may pre-configure the reference model database, and then quickly match the target bone parameters corresponding to the face image from the reference model database.
  • The regularity of local facial features keeps the data volume of the reference model database small, so the computer system can quickly match the target bone parameters from the reference model database according to the key point features of the current face image, and can then use the target bone parameters to create, efficiently and relatively accurately, a virtual three-dimensional face model matching the current face image, which generalizes well.
  • the above method further includes creating a reference model database.
  • the method for creating a face model may further include step 100 to determine a reference model database according to a preset number of face image samples and a standard three-dimensional face model.
  • the reference model database includes at least one reference key point feature determined from a preset number of face image samples and reference bone parameters corresponding to each of the at least one reference key point feature.
  • Correspondingly, step 120 may include: obtaining target bone parameters matching the current face image from the reference model database according to the at least one key point feature.
  • a preset number of face image samples can be obtained, and the image styles of various parts in the aforementioned face image samples can be manually marked. Then, based on the aforementioned annotated face image sample and standard three-dimensional face model, a corresponding virtual three-dimensional face model is generated through a bone control method.
  • the virtual three-dimensional face model generated based on the face image sample and the standard three-dimensional face model is referred to as the reference face model.
  • For example, if 201 face image samples are collected, the computer system will correspondingly generate 201 reference face models, and generate the aforementioned reference model database based on the data of these 201 reference face models.
  • the execution body of creating the reference model database and the execution body of subsequently applying the reference model database to create the face model need not be the same computer system.
  • the execution body of creating the reference model database may be a cloud computer system, such as a cloud server, and the execution body of the above steps 110 to 130 may be a computer system as a terminal device.
  • Alternatively, the determination of the reference model database and the subsequent face model creation can both be executed by the computer system of the terminal device.
  • step 100 may include:
  • Step 101 Obtain a face image sample set containing a preset number of face image samples.
  • the face image sample set includes multiple image patterns that characterize at least one partial face area.
  • the face image sample set may include a certain number of face image samples.
  • The above-mentioned face image samples should contain, as comprehensively as possible, the different image styles of the various face parts such as the forehead, eyes, nose, and lips, to ensure that the generated reference model database includes reference data that is as comprehensive as possible, such as reference key point features and reference bone parameters.
  • For example, the following condition can be met: for a randomly collected two-dimensional face image A, image styles corresponding to the different partial face regions of image A can be found among the image styles of the face parts contained in the face image sample set; in other words, by selectively extracting the image styles of different partial face regions, such as the facial features, from the face image samples, a face image similar to image A can be roughly pieced together.
  • a certain number of face image samples can be collected according to common facial features in the real world to obtain a face image sample set.
  • 201 face image samples are collected to determine the above-mentioned face image sample set.
  • the 201 face image samples may contain multiple image patterns for each partial face area.
  • the partial face area refers to the eyebrow area, eye area, nose area, facial contour and other areas recognized from the two-dimensional face image.
  • For example, the 201 face image samples include the 12 eyebrow shapes corresponding to the eyebrow region shown in FIG. 2 above.
  • the 201 facial image samples include multiple image patterns corresponding to partial facial regions such as mouth, eyes, nose, and facial contours.
  • Step 102 Create a reference three-dimensional face model corresponding to each face image sample according to the standard three-dimensional face model.
  • the virtual three-dimensional face model created for each face image sample is referred to as a reference three-dimensional face model.
  • Each reference 3D face model corresponds to a set of bone control parameters.
  • the above step 102 may include:
  • Step 1021 Perform normalization processing on the face image sample to obtain a preprocessed face image that conforms to the head posture and image size of the standard face image.
  • the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model.
  • Specifically, standardized processing such as face area detection, head posture correction, and image scaling can be performed on each face image sample to obtain a preprocessed face image conforming to the head posture and image size of the standard face image. Compared with the standard face image, the preprocessed face image can be understood as a face image separately collected by the same camera using the same image acquisition parameters for two people with the same object distance and the same head posture.
  • the standard face image can be understood as the projection image of the standard three-dimensional face model in the preset image coordinate system.
  • The standard three-dimensional face model is a virtual model created by the computer system after key point detection on the standard face image, based on the obtained key point set, such as 240 key point position coordinates, and a preset number of bones, such as 61 bones.
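  • As a concrete example of the normalization step, the sketch below aligns an input photo to an assumed standard face image using OpenCV, given known eye centers. The standard image size, the standard eye positions, and the choice of a similarity transform are all assumptions; this application does not prescribe a specific alignment method:

```python
import cv2
import numpy as np

STANDARD_SIZE = (512, 512)                            # assumed (width, height)
STANDARD_EYES = np.float32([[180, 220], [332, 220]])  # assumed eye positions

def normalize_face(image, left_eye, right_eye):
    """Estimate a rotation/scale/translation that maps the detected eye
    centers onto the standard eye positions (head-pose correction plus
    image scaling), then warp the image to the standard size."""
    src = np.float32([left_eye, right_eye])
    matrix, _ = cv2.estimateAffinePartial2D(src, STANDARD_EYES)
    return cv2.warpAffine(image, matrix, STANDARD_SIZE)
```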
  • Step 1022: Perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, where the reference key point set includes the reference key point combinations characterizing each partial face area on the face image sample.
  • any key point detection method well known to those skilled in the art can be used to extract a preset number of key points from the preprocessed face image, such as 68 key points, 106 key points, or 240 key points.
  • In the process of key point detection on the face region image, preset algorithms such as the Roberts edge detection algorithm or the Sobel algorithm can be used; key point detection can also be performed through related models such as the active contour (snake) model.
  • the key point location can be performed by a neural network used for face key point detection. It is also possible to perform face key point detection through third-party applications.
  • the third-party toolkit Dlib is used to perform face key point location, and 68 face key points are detected, as shown in Figure 7.
  • A 240-face-key-point positioning technique can also be used to locate the position coordinates of 240 key points, so as to position detailed features of key parts such as eyebrows, eyes, nose, lips, facial contours, and facial expressions in the current face image and/or face image sample.
  • After detection, the sequence number of each reference key point can be determined according to preset rules, and the reference key point combinations characterizing each partial face area can be determined. For example, in the example shown in Fig. 7, 68 key points are extracted from the face image sample; the reference key point combination composed of reference key points 18 to 22 represents the left eyebrow area. By analogy, different key point combinations are used to characterize different partial face regions.
  • the information of each key point includes the serial number and coordinate position.
  • For different face images, the serial numbers and the number of key points representing the same partial face area are the same, but the coordinate positions of the key points differ. For example, the combination of key points 18-22 extracted from the standard face image also represents the left eyebrow area in the standard face image, but the coordinate position of each key point differs from the coordinate positions of key points 18-22 in the example shown in Figure 7.
  • The coordinate position of a key point refers to its position in the preset image coordinate system, such as the XOY coordinate system shown in FIG. 7. Since the sizes of the preprocessed face images are the same, the same image coordinate system can be used for each preprocessed face image to represent the position coordinates of key points in different preprocessed face images, facilitating subsequent distance calculations.
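  • Since the description mentions the Dlib toolkit and the 68-point layout of Fig. 7, the following sketch shows how the reference key point combinations could be extracted with Dlib. The model file path is an assumption, and note that Dlib indexes landmarks from 0, so key points 18-22 in the text correspond to indices 17-21 here:

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Path to the public 68-landmark model file is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def keypoint_combinations(image):
    """Detect 68 face key points and group them into the combinations
    that characterize each partial face area (standard 68-point grouping)."""
    face = detector(image, 1)[0]
    pts = np.array([(p.x, p.y) for p in predictor(image, face).parts()])
    return {
        "face_contour": pts[0:17],
        "left_eyebrow": pts[17:22],   # key points 18-22 in the text
        "right_eyebrow": pts[22:27],
        "nose": pts[27:36],
        "left_eye": pts[36:42],
        "right_eye": pts[42:48],
        "mouth": pts[48:68],
    }
```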
  • Step 1023 Adjust the corresponding bone parameters in the standard three-dimensional face model based on each of the reference key point combinations, and create a reference face model corresponding to the face image sample.
  • the reference face model corresponding to the face image sample includes the reference bone parameters corresponding to the face image sample.
  • the reference bone parameter may represent the reference face model used to render the face image sample.
  • The system presets a mapping relationship between key point combinations and bones; the mapping relationship may indicate which bone parameters need to be adjusted when generating, in the corresponding three-dimensional face model, the local face region represented by a key point combination.
  • For example, assuming the nose region in the standard three-dimensional face model involves three bones, denoted G1 to G3, when the three-dimensional nose model generated by adjusting the parameters of these three bones approaches the nose shape in the face image sample, the creation of the three-dimensional nose model is complete, and the current bone control parameters of the three bones are the reference bone parameters corresponding to the image style of the nose in the face image sample.
  • The creation of the reference face model is completed when, by adjusting the bone parameters of each partial face area, the generated virtual three-dimensional face model meets the expectations of the user.
  • In this way, the reference bone parameters corresponding to each reference key point combination in the current face image sample can be determined, that is, the reference bone parameters corresponding to the image style of each partial face region in the face image sample, thereby obtaining the reference face model data of the current face image sample.
  • the reference face model data may include the correspondence between the reference key point combination of each partial face region and the reference bone parameters.
  • That is, after a virtual three-dimensional face model, namely a reference face model, is created for a face image sample, the reference face model data corresponding to the face image sample can be obtained according to the correspondence between the reference key point combinations characterizing each local face region and the reference bone parameters.
  • the above steps 1021 to 1023 describe the process of creating a corresponding reference face model based on a face image sample.
  • Step 103 Determine a reference model database according to the reference face model corresponding to each face image sample.
  • the reference model database includes the correspondence between the reference key point features and the reference bone parameters that characterize each image style of each partial face region.
  • the reference face model corresponding to each face image sample can be created according to the method shown in FIG. 6, and then the reference face model data corresponding to each face image sample can be determined.
  • The reference model database may include the correspondence between the reference key point combinations representing the image styles of each partial face region and the reference bone parameters, the reference key point feature data of each face image sample, and the reference bone parameters of each reference face model.
  • The bones have parent-child relationships: when a parent bone moves, it drives its child bones to move, similar to how wrist bone movement drives palm bone movement.
  • The bone parameter adjustment of a partial face region may therefore be related to the adjustment parameters of other bones in the entire face model. For this reason, in the embodiments of the present application, a complete set of reference bone parameters corresponding to the entire reference face model is stored in the reference model database.
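  • The parent-child relationship can be pictured with a toy hierarchy in which a child bone's world position is derived from its parent's, so moving the "wrist" also moves the "palm". This is only an illustration of the dependency; the actual bone transforms of the face model are not specified here:

```python
class HBone:
    """Toy bone with a local offset relative to its parent (translation only)."""
    def __init__(self, name, local_offset, parent=None):
        self.name, self.local_offset, self.parent = name, local_offset, parent

    def world_position(self):
        if self.parent is None:
            return self.local_offset
        px, py = self.parent.world_position()
        lx, ly = self.local_offset
        return (px + lx, py + ly)

wrist = HBone("wrist", (10.0, 0.0))
palm = HBone("palm", (3.0, 0.0), parent=wrist)
wrist.local_offset = (12.0, 1.0)  # moving the parent bone...
print(palm.world_position())      # ...drives the child bone: (15.0, 1.0)
```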
  • In the application stage, the computer system performs key point detection on the currently input face image; after obtaining the key point features, it automatically retrieves the reference model database according to the key point features of the current face image and matches, from the reference model database, the target bone parameters of the different partial face areas.
  • the foregoing step 110 may include:
  • Step 1101 Perform key point detection on the current face image to obtain position coordinates of a preset number of key points.
  • Similar to the preprocessing of the face image samples described above, the computer system can normalize the current face image, including face area detection, head posture correction, and image scaling, to obtain a preprocessed image with the same size as the standard face image.
  • the face key point positioning technology can be used to detect key points on the preprocessed image.
  • 240 face key point positioning technology can be used to perform key point detection on the preprocessed image to obtain the position coordinates of 240 face key points.
  • Step 1102 Determine a key point feature representing at least one partial face region on the current face image according to the position coordinates of the preset number of key points.
  • the key point features representing at least one partial face area in the current face image can be determined.
  • For a partial face area such as the eyebrow area, its key point features can be represented in at least the following two ways:
  • the first way is to use the position coordinate combination of the key points to express the key point features of the local face area.
  • the key point coordinates that characterize a local face area can be combined as the key point feature of the local face area.
  • the coordinate position combination of the key points with serial numbers 18-22 is determined as the key point feature of the left eyebrow area.
  • Although the key points used, including the number of key points and the serial number of each key point, are relatively fixed, for different face images the coordinate positions of key points with the same serial number in the image coordinate system differ.
  • For example, in one face image the coordinate position of key point 18 is (80, 15), that is, the pixel in the 80th row and 15th column; in another face image, the coordinate position of key point 18 may be (100, 20), that is, the pixel in the 100th row and 20th column. Therefore, the position coordinates of a key point combination can effectively distinguish the facial features of different people.
  • the second method is to use the fitting curve of the key point coordinate combination to express the key point features of the local face area.
  • a feature curve that characterizes the local face area can be fitted according to a combination of key point coordinates that characterize a local face area, and the feature curve can be used as the key point feature of the local face area.
  • the characteristic curve fitted according to the coordinate positions of the key points with serial numbers 18-22 is used as the key point feature of the left eyebrow area.
  • the eyelid characteristic curve is fitted according to the position coordinates of the key points 1-12 of the eye as the key point characteristic of the left eye.
  • For different faces, the shapes of the curves fitted from the key point position coordinates differ. Therefore, the characteristic curve can be used as a key point feature characterizing a partial face region in the current face image to distinguish the faces of different people.
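  • A minimal way to obtain such a characteristic curve is to fit a low-degree polynomial through the key point coordinates of the region and sample it densely; the polynomial form and the sampling density are assumptions, since the fitting rule is not fixed by this application:

```python
import numpy as np

def characteristic_curve(points, degree=2, samples=50):
    """Fit a polynomial y = f(x) through the key points of a region
    (e.g., eyebrow key points 18-22) and return a densely sampled polyline
    to serve as the region's characteristic curve."""
    x, y = points[:, 0], points[:, 1]
    coeffs = np.polyfit(x, y, degree)
    xs = np.linspace(x.min(), x.max(), samples)
    return np.stack([xs, np.polyval(coeffs, xs)], axis=1)
```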
  • the target bone parameters matching the current face image can be searched from the reference model database based on the similarity between the key point features.
  • the foregoing step 120 may include:
  • Step 121 For each partial face area in the current face image, determine a reference key point feature matching the key point feature of the partial face area in the reference model database as the target reference key point feature of the partial face area.
  • the above step 121 may include:
  • Step 1211 Determine the similarity between the key point feature of the local face area and the corresponding reference key point feature in the reference model database.
  • the corresponding reference key point feature in the reference model database may be a reference key point feature corresponding to the position of the partial face region in the reference model database.
  • If the key point feature is expressed as a key point coordinate combination, the Euclidean distance between key point coordinate combinations can be used to determine the similarity between the key point feature of the local face area and a reference key point feature.
  • For example, the Euclidean distances between the position coordinates of key points 18-22 in the current face image and the position coordinates of key points 18-22 in any face image sample can be calculated respectively, denoted as l_18, l_19, l_20, l_21, l_22, where l_18 represents the Euclidean distance between the position coordinates of key point 18 in the current face image and the position coordinates of key point 18 in the face image sample, and so on.
  • The similarity of the left eyebrow region in the two images can then be represented by the sum L of the Euclidean distances of key points 18-22.
  • For example, L can be expressed as: L = l_18 + l_19 + l_20 + l_21 + l_22.
  • the aforementioned similarity can also be expressed as a weighted value of the Euclidean distance between key points.
  • For example, preset weights can be set for each key point according to the actual application scenario. If the weights set for key points 18-22 are ω_1, ω_2, ω_3, ω_4, ω_5, then L can be expressed as: L = ω_1*l_18 + ω_2*l_19 + ω_3*l_20 + ω_4*l_21 + ω_5*l_22.
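  • Both forms of L above amount to a (weighted) sum of per-key-point Euclidean distances, for example:

```python
import numpy as np

def eyebrow_similarity(points, ref_points, weights=None):
    """Sum of (optionally weighted) Euclidean distances between key points
    18-22 of the current image and of a face image sample; a smaller L
    means a more similar left eyebrow region."""
    dists = np.linalg.norm(points - ref_points, axis=1)  # l_18 ... l_22
    if weights is None:
        return float(dists.sum())          # L = l_18 + ... + l_22
    return float(np.dot(weights, dists))   # L = w_1*l_18 + ... + w_5*l_22
```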
  • the foregoing step 1211 may include:
  • Step 12111 According to the combination of key point coordinates of the local face area, a characteristic curve characterizing the local face area is fitted.
  • For example, a characteristic curve can be fitted according to preset rules, such as from left to right, as shown in Figure 9-2.
  • Step 12112 Determine the similarity between the key point feature of the partial face area and the corresponding reference key point feature in the reference model database according to the distance between the feature curve and the corresponding reference feature curve in the reference model database.
  • For example, the Fréchet distance can be used to measure the similarity between key point features, that is, between two characteristic curves.
  • A combination of the two can also be used to determine the target reference key point feature. For example, the Euclidean distances between the key point coordinate combination and each corresponding reference key point coordinate combination in the reference model database can be calculated respectively; if at least two reference key point coordinate combinations in the reference model database have the same Euclidean distance to the key point coordinate combination, the Fréchet distance between each of these reference key point coordinate combinations and the key point coordinate combination is further calculated, thereby effectively identifying the target reference key point feature whose characteristic curve shape is closest to that of the key point coordinate combination in the current face image.
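  • The Fréchet distance between two sampled characteristic curves can be computed with the standard discrete dynamic-programming formulation, sketched below; this application does not mandate this particular discretization:

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two curves given as arrays of 2D
    points, via the classic dynamic-programming recurrence."""
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)  # pairwise dists
    ca = np.zeros((n, m))
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```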
  • In an embodiment, according to the distribution characteristics of the local face area, a corresponding strategy may be used to determine the similarity between the key point feature of the local face area and the corresponding reference key point feature in the reference model database.
  • If the partial face area includes at least two sub-regions, the partial face area can be characterized by the key point features of the at least two sub-regions. Referring to FIG. 13, when determining the key point features of such a partial face area, the above step 1211 may include:
  • Step 1211-1 For each sub-region in the partial face region, determine the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample in the reference model database. Obtain the local similarity corresponding to the sub-region.
  • the corresponding sub-region of the face image sample refers to the sub-region in the face image sample corresponding to the position of the sub-region currently being processed in the partial face region.
  • the key point features of the eye area include key point features corresponding to the left eye area and the right eye area respectively.
  • the key point features of the eyebrow area include the key point features corresponding to the left eyebrow area and the right eyebrow area respectively.
  • the key point features of the mouth area include the key point features corresponding to the upper lip area and the lower lip area respectively.
  • For example, the similarity between the key point feature of the left eye area and each left-eye reference key point feature in the reference model database, and the similarity between the key point feature of the right eye area and each right-eye reference key point feature in the reference model database, can be determined according to any of the above determination methods.
  • the similarity between the key point feature of a sub-region and the reference key point feature of the corresponding sub-region in the reference model database is called the local similarity.
  • Step 1211-2: Determine, according to the local similarity corresponding to each sub-region, the overall similarity between the partial face region and the corresponding partial face region of the face image sample, as the similarity between the key point feature of the partial face region and the corresponding reference key point feature of the face image sample in the reference model database.
  • For a partial face region such as the eyes, eyebrows, or mouth that can be characterized by the key point features of two sub-regions, after the local similarity of each sub-region is computed, the two local similarities can be summed, or weighted and summed, to serve as the similarity between the current face image and the corresponding region of a face image sample, as in the sketch below.
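  • A minimal sketch of steps 1211-1 and 1211-2 under stated assumptions: each local similarity is derived here from the Euclidean sum defined above via the conversion 1/(1+d), and the optional weights are free design parameters; neither detail is prescribed by the patent:

```python
def region_similarity(sub_features, ref_sub_features, weights=None):
    """Overall similarity of a multi-sub-region area (e.g. left + right eye).

    sub_features / ref_sub_features: lists of per-sub-region key point
    features, aligned by position (left eye with left eye, and so on).
    """
    # Turn each sub-region distance into a local similarity score.
    sims = [1.0 / (1.0 + euclidean_sum(s, r))
            for s, r in zip(sub_features, ref_sub_features)]
    if weights is None:          # plain sum by default
        return sum(sims)
    return sum(w * s for w, s in zip(weights, sims))
```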
  • In this way, for a partial face region that includes multiple sub-regions, the overall similarity of regions such as the eyes, eyebrows, and mouth can be compared more accurately on the basis of the local similarities of the multiple sub-regions, and the target bone parameters of the partial face region can in turn be determined more accurately from the reference model database.
  • Step 1212: Determine the reference key point feature with the highest similarity as the target reference key point feature of the partial face region.
  • Following any of the methods described for step 1211, the similarity between the key point feature of a partial face region in the current face image and the corresponding reference key point feature of each face image sample in the reference model database is calculated, and the reference key point feature with the highest similarity is determined as the target reference key point feature of that partial face region in the current face image.
  • Step 122: Determine the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point feature of each partial face region in the current face image.
  • In one embodiment of the present application, the reference model database stores the reference key point feature corresponding to each image style of each partial face region, as well as the correspondence between those reference key point features and the reference bone parameters.
  • Suppose M partial face regions can be obtained by dividing each face image, for example, 5 partial face regions: eyebrows, eyes, nose, mouth, and face contour. If the number of image styles for the i-th partial face region is n_i, the reference model database stores at least N reference key point features, where N = n_1 + n_2 + ... + n_M; in the example of FIG. 2, n_1 = 12 for the eyebrow region. Each reference key point feature corresponds to a set of reference bone parameters, so at least N sets of reference bone parameters corresponding to the N reference key point features are stored in the reference model database.
  • If the aforementioned reference key point features are reference characteristic curves fitted from the coordinate positions of the reference key points, then the reference model database stores at least N sets of reference bone parameters corresponding to the N reference characteristic curves.
  • If the aforementioned reference key point features are coordinate position combinations of the reference key points, that is, reference key point coordinate combinations, then the reference model database stores at least N sets of reference bone parameters corresponding to the N reference key point coordinate combinations.
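  • One plausible in-memory layout for such a database, shown purely as an assumption (the patent does not fix a storage format), keeps per-region lists of reference features, each paired with the full bone parameter set of the reference face model it came from:

```python
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class ReferenceEntry:
    # Reference key point coordinate combination, or a sampled reference
    # characteristic curve, depending on how features are expressed.
    feature: np.ndarray
    # Full set of reference bone parameters of the reference face model.
    # Stored per whole model: parent bones drive child bones, so the
    # parameters of one region are not independent of the rest.
    bone_params: Dict[str, float]

@dataclass
class ReferenceModelDatabase:
    # Region name ("eyebrows", "eyes", ...) -> its reference entries.
    regions: Dict[str, List[ReferenceEntry]] = field(default_factory=dict)

    def add(self, region: str, entry: ReferenceEntry) -> None:
        self.regions.setdefault(region, []).append(entry)
```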
  • After the computer system obtains the key point feature of a partial face region from the current face image, it can determine the reference key point feature in the reference model database that matches that key point feature, for example the most similar one, as the target reference key point feature, and then determine the target bone parameters suitable for the key point feature according to the reference bone parameters corresponding to the target reference key point feature.
  • For example, the reference bone parameters corresponding to the target reference key point feature can be directly used as the target bone parameters for the partial face region corresponding to that key point feature in the current face image.
  • In the embodiments of the present application, by choosing between the Euclidean distance and the Fréchet distance according to how the key point features are expressed, the similarity between the key point features of the partial face regions in the current face image and the corresponding reference key point features in the reference model database can be measured, and the target bone parameters of the partial face regions in the current face image can be determined from the reference model database relatively accurately and quickly; the bone adjustment parameters of the face model to be created follow directly, which can effectively improve the user experience of the face-customization function in preset application scenarios such as games.
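  • Tying the pieces together, a deliberately simplified sketch of steps 121 and 122 might look as follows; it reuses match_target_reference and ReferenceModelDatabase from the sketches above, and the three callables passed in stand for components the patent describes (key point detection, region splitting, bone-driven model adjustment) but does not define as code:

```python
def create_face_model(image, db, standard_model,
                      detect_keypoints, split_into_regions, apply_bone_params):
    """Sketch of steps 121-122: keypoints -> target bone parameters -> model.

    db is a ReferenceModelDatabase; the three callables are assumptions
    of this sketch, not APIs taken from the patent.
    """
    keypoints = detect_keypoints(image)            # e.g. 240-point localization
    target_params = {}
    for region, combo in split_into_regions(keypoints).items():
        refs = db.regions[region]
        idx = match_target_reference(combo, [r.feature for r in refs])
        # The reference bone parameters of the best match become the
        # target bone parameters for this partial face region.
        target_params[region] = refs[idx].bone_params
    return apply_bone_params(standard_model, target_params)
```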
  • Corresponding to the foregoing embodiments of the method, the present disclosure also provides embodiments of an apparatus implementing the corresponding functions and of a corresponding terminal.
  • Correspondingly, an embodiment of the present application provides an apparatus for creating a face model.
  • Referring to FIG. 14, the apparatus may include: a key point detection module 21 configured to perform key point detection on the current face image to obtain at least one key point feature of the current face image; a parameter matching module 22 configured to obtain target bone parameters matching the current face image according to the at least one key point feature; and a model creation module 23 configured to create a virtual three-dimensional face model corresponding to the current face image according to the target bone parameters and the standard three-dimensional face model.
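  • As a structural illustration only, the three modules could be composed as below; the class and method names are assumptions mirroring the reference numerals, not an API taken from the patent:

```python
class FaceModelCreationApparatus:
    """Mirrors modules 21-23: detection, parameter matching, model creation."""

    def __init__(self, keypoint_detector, parameter_matcher, model_creator):
        self.keypoint_detection_module = keypoint_detector   # module 21
        self.parameter_matching_module = parameter_matcher   # module 22
        self.model_creation_module = model_creator           # module 23

    def run(self, current_face_image, standard_model):
        features = self.keypoint_detection_module(current_face_image)
        bone_params = self.parameter_matching_module(features)
        return self.model_creation_module(bone_params, standard_model)
```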
  • In one apparatus embodiment, the virtual three-dimensional face model output by the model creation module 23 may be a cartoonized virtual three-dimensional face model corresponding to the current face image. In another apparatus embodiment, it may instead be a virtual three-dimensional face model that approximates the actual face in the current face image, that is, a lifelike virtual three-dimensional model of the real face.
  • Referring to FIG. 15, on the basis of the apparatus embodiment shown in FIG. 14, the apparatus may further include: a database creation module 20 configured to determine the reference model database based on a preset number of face image samples and the standard three-dimensional face model.
  • Here, the reference model database includes at least one reference key point feature determined from the preset number of face image samples and the reference bone parameters corresponding to each such reference key point feature. In this case, the parameter matching module 22 is specifically configured to obtain the target bone parameters matching the current face image from the reference model database according to the at least one key point feature.
  • Referring to FIG. 16, the database creation module 20 may include: a sample acquisition sub-module 201 configured to acquire a face image sample set containing the preset number of face image samples, the face image sample set including multiple image styles characterizing at least one partial face region; a reference model creation sub-module 202 configured to create, for each face image sample, a reference face model corresponding to the face image sample according to the standard three-dimensional face model, the reference face model including the reference bone parameters corresponding to the face image sample; and a database determination sub-module 203 configured to determine the reference model database according to the reference face model corresponding to each face image sample. Here, the reference model database includes the correspondence between the key point features characterizing each image style of each partial face region and the reference bone parameters.
  • Referring to FIG. 17, the reference model creation sub-module 202 may include: an image preprocessing unit 2021 configured to normalize a face image sample to obtain a preprocessed face image that conforms to the head posture and image size of a standard face image, the standard face image being the two-dimensional face image corresponding to the standard three-dimensional face model; a key point detection unit 2022 configured to perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, the reference key point set including the reference key point combinations characterizing each partial face region on the face image sample; and a reference model creation unit 2023 configured to adjust the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination, creating the reference face model corresponding to the face image sample.
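  • A rough sketch of what the normalization performed by a unit such as 2021 might involve, assuming OpenCV-style primitives; the patent does not specify a pose-correction method, and aligning detected eye centers to canonical positions with a similarity transform is just one common choice:

```python
import numpy as np
import cv2

def normalize_face(image, eye_centers, canonical_eyes=((76., 100.), (180., 100.)),
                   out_size=(256, 256)):
    """Warp the face so the detected eye centers land on canonical positions.

    eye_centers: ((lx, ly), (rx, ry)) detected left/right eye centers.
    """
    src = [complex(x, y) for x, y in eye_centers]
    dst = [complex(x, y) for x, y in canonical_eyes]
    # Similarity transform z -> a*z + b mapping the source eye pair onto the
    # canonical pair (a encodes rotation+scale, b the translation).
    a = (dst[1] - dst[0]) / (src[1] - src[0])
    b = dst[0] - a * src[0]
    m = np.float32([[a.real, -a.imag, b.real],
                    [a.imag,  a.real, b.imag]])
    return cv2.warpAffine(image, m, out_size)
```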
  • Referring to FIG. 18, the key point detection module 21 may include: a key point positioning sub-module 211 configured to perform key point detection on the current face image to obtain the position coordinates of a preset number of key points; and a key point feature determination sub-module 212 configured to determine, according to the position coordinates of the preset number of key points, the key point features characterizing at least one partial face region on the current face image.
  • In the embodiments of the present application, the key point feature may include a key point coordinate combination and/or a characteristic curve.
  • Referring to FIG. 19, the key point feature determination sub-module 212 may include: a coordinate combination determination unit 2121 configured to determine, based on the position coordinates of the preset number of key points, the key point coordinate combination characterizing a first partial face region on the current image as the key point feature characterizing that first partial face region, where the first partial face region is any one of the at least one partial face region; and a characteristic curve determination unit 2122 configured to fit, according to the key point coordinate combination characterizing the first partial face region, a characteristic curve characterizing the first partial face region as the key point feature characterizing that first partial face region.
  • In the present application, the at least one partial face region includes at least one of the following: eyebrows, eyes, nose, mouth, and face contour.
  • The apparatus embodiment shown in FIG. 19 corresponds to the case where the key point feature determination sub-module 212 includes both a coordinate combination determination unit 2121 and a characteristic curve determination unit 2122. In another apparatus embodiment of the present application, the key point feature determination sub-module 212 may include only the coordinate combination determination unit 2121, or only the characteristic curve determination unit 2122.
  • Referring to FIG. 20, the parameter matching module 22 may include: a feature matching sub-module 221 configured to determine, for each partial face region in the current face image, the reference key point feature in the reference model database that matches the key point feature of the partial face region, as the target reference key point feature of the partial face region; and a bone parameter determination sub-module 222 configured to determine the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point feature of each partial face region in the current face image.
  • Referring to FIG. 21, the feature matching sub-module 221 may include: a similarity determination unit 2211 configured to determine the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database; and a target feature determination unit 2212 configured to determine the reference key point feature with the highest similarity as the target reference key point feature of the partial face region.
  • In one apparatus embodiment, the key point feature may be a characteristic curve fitted from the position coordinates of the key points.
  • Correspondingly, referring to FIG. 22, the similarity determination unit 2211 may include: a curve fitting sub-unit 2201 configured to fit a characteristic curve characterizing a partial face region according to the key point coordinate combination of the partial face region; and a similarity determination sub-unit 2202 configured to determine the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database.
  • In the embodiments of the present application, the distance may be the Euclidean distance or the Fréchet distance. Correspondingly, the target feature determination unit 2212 may determine the reference characteristic curve with the smallest distance value as the target reference key point feature.
  • Referring to FIG. 23, the similarity determination unit 2211 may include: a local similarity determination sub-unit 22111 configured, in the case where a partial face region includes at least two sub-regions, to determine, for each sub-region in the partial face region and for each face image sample in the reference model database, the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample in the reference model database, obtaining the local similarity corresponding to the sub-region; and an overall similarity determination sub-unit 22112 configured to determine, for each face image sample in the reference model database and according to the local similarity corresponding to each sub-region, the overall similarity between the partial face region and the corresponding partial face region in the face image sample, as the similarity between the key point feature of the partial face region and the corresponding reference key point feature of the face image sample in the reference model database.
  • Since the apparatus embodiments substantially correspond to the method embodiments, for relevant parts, reference may be made to the description of the method embodiments. The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of this application, and those of ordinary skill in the art can understand and implement them without creative effort.
  • Corresponding to the method for creating a face model described above, an embodiment of the present application further provides an electronic device; FIG. 24 is a schematic structural diagram of the electronic device according to an exemplary embodiment of the present application.
  • At the hardware level, the electronic device includes a processor 241, an internal bus 242, a network interface 243, a memory 245, and a non-volatile memory 246, and of course may also include hardware required by other services. The processor 241 reads the corresponding computer program from the non-volatile memory 246 into the memory 245 and then runs it, forming an apparatus for creating a face model at the logical level. Of course, in addition to software implementations, this application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution body of the processing flow is not limited to logic units and may also be hardware or logic devices.
  • Those skilled in the art will appreciate that one or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • An embodiment of this specification also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method for creating a face model provided by any one of the embodiments of FIGS. 1 to 13 of this specification.
  • The embodiments of the subject matter and functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware including the structures disclosed in this specification and their structural equivalents, or in a combination of one or more of them.
  • The embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier to be executed by a data processing apparatus or to control the operation of the data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information and transmit it to a suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • The processing and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform the corresponding functions by operating on input data and generating output. The processing and logic flows can also be performed by special-purpose logic circuitry, such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and the apparatus can also be implemented as special-purpose logic circuitry.
  • Computers suitable for executing computer programs include, for example, general-purpose and/or special-purpose microprocessors, or any other type of central processing unit.
  • Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices.
  • Moreover, a computer can be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive, to name a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.
  • The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Graphics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a method, an apparatus, and an electronic device for creating a face model. The method includes: performing key point detection on a current face image to obtain at least one key point feature of the current face image; obtaining, according to the at least one key point feature, target bone parameters matching the current face image; and creating, according to the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.

Description

创建脸部模型 技术领域
本申请涉及三维建模技术领域,特别涉及创建脸部模型的方法、装置及电子设备。
背景技术
随着移动终端和计算机技术的发展,游戏、虚拟社交等应用程序的用户逐渐增多。在游戏和虚拟社交应用中,人们日益追求对虚拟角色的个性化设计,由此产生了对捏脸实现的极大需求。所谓捏脸,是指创建虚拟角色的三维脸部模型。目前,如何提高捏脸效率及准确性是本领域技术人员正在研究的技术。
发明内容
本申请实施例提供一种创建脸部模型的方法、装置及电子设备。
根据本申请实施例的第一方面,提供了一种创建脸部模型的方法,包括:对当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征;根据所述至少一个关键点特征,获取与所述当前人脸图像匹配的目标骨骼参数;依据所述目标骨骼参数和标准三维人脸模型,创建所述当前人脸图像对应的虚拟三维脸部模型。
结合本申请提供的任一方法实施例,所述方法还包括:根据预设数量的人脸图像样本和所述标准三维人脸模型,确定参考模型数据库,其中,所述参考模型数据库包括从预设数量的人脸图像样本确定的至少一个参考关键点特征以及所述至少一个参考关键点特征各自对应的参考骨骼参数。相应地,根据所述至少一个关键点特征,获取与所述当前人脸图像匹配的目标骨骼参数,包括:根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数。
结合本申请提供的任一方法实施例,根据所述预设数量的人脸图像样本和所述标准三维人脸模型,确定所述参考模型数据库,包括:获取含有所述预设数量的人脸图像样本的人脸图像样本集合,所述人脸图像样本集合中包括表征至少一个局部人脸区域的多种图像样式;针对每个所述人脸图像样本,依据所述标准三维人脸模型创建所述人脸图像样本对应的参考人脸模型,所述参考人脸模型包括与所述人脸图像样本对应的所述参考骨骼参数;根据每个所述人脸图像样本对应的参考人脸模型,确定所述参考模型数据库。其中,所述参考模型数据库包括表征每一个所述局部人脸区域的每一种所述图像样式的所述参考关键点特征与所述参考骨骼参数之间的对应关系。
结合本申请提供的任一方法实施例,依据所述标准三维人脸模型,创建所述人脸图像样本对应的所述参考人脸模型,包括:对所述人脸图像样本进行规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像,其中,所述标准人脸图像为与所述标准三维人脸模型对应的二维人脸图像;对所述预处理人脸图像进行关键点检测,获得所述人脸图像样本的参考关键点集合,所述参考关键点集合包括表征所述人脸图像样本上的各所述局部人脸区域的参考关键点组合;基于每个所述参考关键点组合对 所述标准三维人脸模型中对应的骨骼参数进行调整,创建所述人脸图像样本对应的所述参考人脸模型。
结合本申请提供的任一方法实施例,所述对所述当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征,包括:对所述当前人脸图像进行关键点检测,获得预设数量关键点的位置坐标;根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的关键点特征。
结合本申请提供的任一方法实施例,根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的所述关键点特征,包括:基于所述预设数量关键点的位置坐标,确定表征所述当前图像上第一局部人脸区域的关键点坐标组合作为表征该第一局部人脸区域的所述关键点特征,其中,该第一局部人脸区域为所述至少一个局部人脸区域中的任一个;和/或,根据表征该第一局部人脸区域的所述关键点坐标组合,拟合出表征该第一局部人脸区域的特征曲线作为表征该第一局部人脸区域的所述关键点特征。
结合本申请提供的任一方法实施例,根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数,包括:针对所述当前人脸图像中每个局部人脸区域,确定所述参考模型数据库中与所述局部人脸区域的关键点特征匹配的参考关键点特征作为所述局部人脸区域的目标参考关键点特征;依据与所述当前人脸图像中每个所述局部人脸区域的目标参考关键点特征对应的参考骨骼参数,确定所述当前人脸图像的目标骨骼参数。
结合本申请提供的任一方法实施例,确定所述参考模型数据库中与所述局部人脸区域的关键点特征匹配的参考关键点特征作为所述局部人脸区域的目标参考关键点特征,包括:确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度;将相似度最高的所述参考关键点特征确定为所述局部人脸区域的所述目标参考关键点特征。
结合本申请提供的任一方法实施例,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度,包括:根据所述局部人脸区域的关键点坐标组合,拟合出表征所述局部人脸区域的特征曲线;根据所述特征曲线与所述参考模型数据库中对应的参考特征曲线之间的距离,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度。
结合本申请提供的任一方法实施例,在所述局部人脸区域包括至少两个子区域的情况下,确定所述局部人脸区域的关键点特征与所述参考模型数据库中一个人脸图像样本的对应参考关键点特征之间的相似度,包括:针对所述局部人脸区域中的每个所述子区域,确定所述子区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应子区域的参考关键点特征之间的相似度,获得所述子区域对应的局部相似度;根据每个所述子区域对应的所述局部相似度,确定所述局部人脸区域与该人脸图像样本中的对应局部人脸区域之间的整体相似度,作为所述局部人脸区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应参考关键点特征之间的相似度。
根据本申请实施例的第二方面,提供了了一种创建脸部模型的装置,所述装置包括: 关键点检测模块,用于对当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征;参数匹配模块,用于根据所述至少一个关键点特征获取与所述当前人脸图像匹配的目标骨骼参数;模型创建模块,用于依据所述目标骨骼参数和标准三维人脸模型,创建所述当前人脸图像对应的虚拟三维脸部模型。
结合本申请提供的任一装置实施例,所述装置还包括数据库创建模块,用于根据预设数量的人脸图像样本和所述标准三维人脸模型,确定所述参考模型数据库,其中,所述参考模型数据库包括从预设数量的人脸图像样本确定的至少一个参考关键点特征以及所述至少一个参考关键点特征各自对应的参考骨骼参数。在这种情况下,所述参数匹配模块具体用于:根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数。
结合本申请提供的任一装置实施例,所述数据库创建模块,包括:样本获取子模块,用于获取含有所述预设数量的所述人脸图像样本的人脸图像样本集合,所述人脸图像样本集合中包括表征至少一种局部人脸区域的多种图像样式;参考模型创建子模块,用于针对每个所述人脸图像样本,依据所述标准三维人脸模型创建所述人脸图像样本对应的参考人脸模型,所述参考人脸模型包括与所述人脸图像样本对应的所述参考骨骼参数;数据库确定子模块,用于根据每个所述人脸图像样本对应的参考人脸模型,确定所述参考模型数据库。其中,所述参考模型数据库包括表征每一个所述局部人脸区域的每一种所述图像样式的所述关键点特征与所述参考骨骼参数之间的对应关系。
结合本申请提供的任一装置实施例,所述参考模型创建子模块,包括:图像预处理单元,用于对一幅所述人脸图像样本进行规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像,其中,所述标准人脸图像为与所述标准三维人脸模型对应的二维人脸图像;关键点检测单元,用于对所述预处理人脸图像进行关键点检测,获得所述人脸图像样本的参考关键点集合,所述参考关键点集合包括表征所述人脸图像样本上的各所述局部人脸区域的参考关键点组合;参考模型创建单元,用于基于每个所述参考关键点组合对所述标准三维人脸模型中对应的骨骼参数进行调整,创建所述人脸图像样本对应的所述参考人脸模型。
结合本申请提供的任一装置实施例,所述关键点检测模块包括:关键点定位子模块,用于对所述当前人脸图像进行关键点检测,获得预设数量关键点的位置坐标;关键点特征确定子模块,用于根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的关键点特征。
结合本申请提供的任一装置实施例,所述关键点特征确定子模块包括:坐标组合确定单元,用于基于所述预设数量关键点的位置坐标,确定表征所述当前图像上第一局部人脸区域的关键点坐标组合作为表征该第一局部人脸区域的所述关键点特征,其中,该第一局部人脸区域位所述至少一个局部人脸区域中的任一个;和/或,特征曲线确定单元,用于根据表征该第一局部人脸区域的关键点坐标组合,拟合出表征该第一局部人脸区域的特征曲线作为表征该第一局部人脸区域的所述关键点特征。
结合本申请提供的任一装置实施例,所述参数匹配模块,包括:特征匹配子模块,用于针对所述当前人脸图像中每个局部人脸区域,确定所述参考模型数据库中与所述局 部人脸区域的关键点特征匹配的参考关键点特征,作为所述局部人脸区域的目标参考关键点特征;骨骼参数确定子模块,用于依据与所述当前人脸图像中每个所述局部人脸区域的目标参考关键点特征对应的参考骨骼参数,确定所述当前人脸图像的目标骨骼参数。
结合本申请提供的任一装置实施例,所述特征匹配子模块,包括:相似度确定单元,用于确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度;目标特征确定单元,用于将相似度最高的所述参考关键点特征确定为所述局部人脸区域的所述目标参考关键点特征。
结合本申请提供的任一装置实施例,所述相似度确定单元,包括:曲线拟合子单元,用于根据所述局部人脸区域的关键点坐标组合,拟合出表征所述局部人脸区域的特征曲线;相似度确定子单元,用于根据所述特征曲线与所述参考模型数据库中对应的参考特征曲线之间的距离,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度。
结合本申请提供的任一装置实施例,所述相似度确定单元,包括:局部相似度确定子单元,用于在一个所述局部人脸区域包括至少两个子区域的情况下,针对所述局部人脸区域中的每个所述子区域,针对所述参考模型数据库中的每个人脸图像样本,确定所述子区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应子区域的参考关键点特征之间的相似度,获得所述子区域对应的局部相似度;整体相似度确定子单元,用于针对所述参考模型数据库中的每个人脸图像样本,根据每个所述子区域对应的所述局部相似度,确定所述局部人脸区域与该人脸图像样本中的对应局部人脸区域之间的整体相似度,作为所述局部人脸区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应参考关键点特征之间的相似度。
根据本申请实施例的第三方面,提供了一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述第一方面任一项所述的方法。
根据本申请实施例的第四方面,提供了一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现上述第一方面任一项所述的方法。
采用本申请实施例提供的创建脸部模型的方法,计算机系统通过基于表征局部人脸区域的关键点特征自动获取与人脸图像对应的目标骨骼参数,并依据上述目标骨骼参数自动对标准三维人脸模型进行骨骼参数调整,可自动创建出适配所述当前人脸图像的虚拟三维脸部模型。在整个模型创建过程中,无需用户根据自己的主观判断、不断尝试手动调整复杂的骨骼参数,减小了用户操作难度。
在一些实施例中,计算机系统可以预先配置参考模型数据库,进而从参考模型数据库中快速匹配出与人脸图像对应的目标骨骼参数。其中,人脸局部区域特征的规律性使得参考模型数据库的数据量不大,从而使得计算机系统根据当前人脸图像的关键点特征可以从上述参考模型数据库中快速匹配出目标骨骼参数,进而可利用该目标骨骼参数高效且相对准确地创建出与当前人脸图像匹配的虚拟三维脸部模型。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能 限制本申请。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
图1是本申请根据一示例性实施例示出的创建脸部模型的方法流程图。
图2是本申请根据一示例性实施例示出的创建脸部模型的应用场景示意图。
图3-1和图3-2是本申请根据另一示例性实施例示出的创建脸部模型的应用场景示意图。
图4是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图5是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图6是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图7是本申请根据另一示例性实施例示出的创建脸部模型的应用场景示意图。
图8是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图9-1、图9-2和图9-3是本申请根据另一示例性实施例示出的创建脸部模型的应用场景示意图。
图10是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图;
图11是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图12是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图13是本申请根据另一示例性实施例示出的创建脸部模型的方法流程图。
图14是本申请根据一示例性实施例示出的创建脸部模型的装置框图。
图15是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图16是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图17是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图18是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图19是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图20是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图21是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图22是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图23是本申请根据另一示例性实施例示出的创建脸部模型的装置框图。
图24是本申请根据另一示例性实施例示出的一种电子设备的结构示意图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
在本申请使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本申请。在本申请和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本文中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本申请可能采用术语第一、第二、第三等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本申请范围的情况下,第一信息也可以被称为第二信息,类似地,第二信息也可以被称为第一信息。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
在游戏行业和虚拟现实的推动下,数字化虚拟角色得到了大量的应用。以游戏应用场景为例,“虚拟角色”已经从单一的虚拟形象演变为玩家自己设计的角色,从而允许创建更具个性的角色形象。
本申请实施例中,提供了一种基于虚拟骨骼控制方式创建虚拟角色三维脸部模型的方法。该方法涉及的执行主体可包括计算机系统,还可以包括摄像头以采集人脸图像。
上述计算机系统可以设置于服务器、服务器集群或者云平台中,也可以是个人计算机、移动终端等电子设备。上述移动终端可以具体为智能手机、PDA(Personal Digital Assistant,个人数字助理)、平板电脑、游戏机等电子设备。在具体实现过程中,摄像头和计算机系统各自独立,同时又相互联系,以共同实现本申请实施例提供的创建脸部模型的方法。
参见图1根据一示例性实施例示出的创建脸部模型的方法流程图,该方法可以包括:
步骤110,对当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征。其中,每个所述关键点特征可表征所述当前人脸图像上的一个或多个局部人脸区域。
以游戏场景为例,游戏应用界面可以提供有用户操作入口。这样,游戏玩家可以通过该用户操作入口输入一幅人脸图像,以期待计算机系统的后台程序可根据该人脸图像创建出相应的虚拟三维人脸模型。换言之,计算机系统可通过捏脸功能基于游戏玩家输入的人脸图像创建一个虚拟三维人脸模型,满足游戏玩家对游戏角色的个性化需求。
上述当前人脸图像可以是游戏玩家拍摄的,也可以是游戏玩家从图片数据库中选取的。上述当前人脸图像可以是针对现实世界中的人拍摄的图像,也可以是手工或采用绘图软件设计的虚拟人物画像。本申请实施例对当前人脸图像的获取方式以及图像中人物在现实世界的真实存在性不作限定。
相应的,计算机系统在接收到用户输入的当前人脸图像后,可以先对当前人脸图像进行规范化处理,获得预设头部姿态和预设图像尺寸的人脸区域图像。比如,利用预先训练好的神经网络进行人脸检测、脸部姿态校正、图像缩放等处理,获得预设图像尺寸并符合预设头部姿态的人脸图像。
然后,计算机系统可利用相关领域技术人员熟知的任意人脸关键点检测方法对上述预处理后的人脸区域图像进行关键点检测,获得当前人脸图像的关键点特征。
本申请中,当前人脸图像的关键点特征可以包括关键点的位置坐标信息,也可以包括根据多个关键点的位置坐标信息拟合而成的、表征局部人脸区域的特征曲线,如眼睑线、唇线等轮廓线。
步骤120,根据所述关键点特征,获取与所述当前人脸图像匹配的目标骨骼参数。
根据一示例,步骤120可具体为,根据所述关键点特征,从参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数。其中,参考模型数据库包括从预设数量的人脸图像样本确定的参考关键点特征以及所述参考关键点特征各自对应的参考骨骼参数。
本申请中,鉴于人脸面部五官具有较强的规律性,每一部位可以采用有限的图像样式进行表征。例如,通过有限的几种眼型可表达大部分人的眼睛特征;采用有限的眉毛样式图像可表征大部分人的眉毛特征。如图2所示,采用十二种眉型可以涵盖大部分人脸部的眉毛特征。
基于此,本申请实施例中,计算机系统可以预先根据一定数量的人脸图像样本确定参考模型数据库。其中,参考模型数据库中包括从人脸图像样本中确定的参考关键点特征以及参考关键点特征各自对应的参考骨骼参数,所述参考骨骼参数可表示用于生成(render)所述人脸图像样本的参考人脸模型。
计算机系统在获取当前人脸图像的关键点特征之后,可以查找出与所述关键点特征最相似的参考关键点特征作为目标参考关键点特征,然后从参考模型数据库中获取与上述目标参考关键点特征对应的参考骨骼参数作为与所述当前人脸图像适配的目标骨骼参数。
需要说明的是,本申请对参考模型数据库的数据结构不作限定。例如,参考模型数据库可以包括表示用于生成人脸图像样本的参考人脸模型的参考骨骼参数以及从所述人脸图像样本中获取的参考关键点特征与所述参考骨骼参数之间的对应关系。
在确定当前人脸图像的关键点特征后,可利用表征预设局部人脸区域的关键点特征匹配参考模型数据库,从中获取该关键点特征对应的参考骨骼参数,作为当前人脸图像中该局部人脸区域的目标骨骼参数。依照上述方式可获得当前人脸图像中各个局部人脸区域的目标骨骼参数,如眼部、嘴部、眉毛、鼻子、脸部轮廓等区域的目标骨骼参数,从而获得适配当前人脸图像的一套目标骨骼参数。
步骤130,依据所述目标骨骼参数和标准三维人脸模型,创建所述当前人脸图像对应的虚拟三维脸部模型。
在确定当前人脸图像对应的目标骨骼参数之后,计算机系统可以依据上述目标骨骼 参数对标准三维人脸模型中的骨骼进行参数调整,生成反映当前人脸图像的脸部特征的虚拟三维脸部模型。
其中,该虚拟三维脸部模型可以接近实际人物脸部特征的虚拟三维脸部模型,也可以是反映人物神态的卡通化虚拟三维脸部模型。本申请实施例不限定最后输出的三维脸部模型必须接近现实世界人物的脸部特征。
示例性的,参见图3-1根据一示例性实施例示出的一种标准三维人脸模型的示意图,该标准三维人脸模型属于一种卡通化虚拟三维脸部模型。相应的,图3-2示出了上述标准人脸三维模型的骨骼示意图。整个模型由预设数量的骨骼架构,如61根骨骼。图3-2中每两点之间的连线代表一根骨骼。每个部位涉及一根或多根骨骼,如鼻子部位涉及3根骨骼,通过调整该3根骨骼的参数可以生成不同类型的三维鼻子模型。
可知,本申请实施例中,计算机系统通过基于表征局部人脸区域的关键点特征自动获取与人脸图像对应的目标骨骼参数,并依据上述目标骨骼参数自动对标准三维人脸模型进行骨骼参数调整,可自动创建出适配所述当前人脸图像的虚拟三维脸部模型。在整个模型创建过程中,无需用户根据自己的主观判断、不断尝试手动调整复杂的骨骼参数,减小了用户操作难度。
并且,在一些实施例中,计算机系统可以预先配置参考模型数据库,进而从参考模型数据库中快速匹配出与人脸图像对应的目标骨骼参数。其中,人脸局部区域特征的规律性使得参考模型数据库的数据量不大,从而使得计算机系统根据当前人脸图像的关键点特征可以从参考模型数据库中快速匹配出目标骨骼参数,进而可利用该目标骨骼参数高效且相对准确地创建出与当前人脸图像匹配的虚拟三维脸部模型,具有较好的泛化性。
对于系统中还未建立参考模型数据库的情况,比如,系统首次开机或系统初始化时,上述方法还包括创建参考模型数据库。
参见图4,在上述步骤110之前,所述创建脸部模型的方法还可以包括步骤100,以根据预设数量的人脸图像样本和标准三维人脸模型,确定参考模型数据库。其中,所述参考模型数据库包括从预设数量的人脸图像样本确定的至少一个参考关键点特征以及所述至少一个参考关键点特征各自对应的参考骨骼参数。进一步的,步骤120可包括:根据所述至少一个关键点特征,从上述参考模型库数据库中获取与当前人脸图像匹配的目标骨骼参数。
本申请实施例中,可以获取预设数量的人脸图像样本,并手动标注出上述人脸图像样本中各部位的图像样式。然后,基于上述标注后的人脸图像样本和标准三维人脸模型,通过骨骼控制方式生成对应的虚拟三维人脸模型。本申请实施例中,将根据人脸图像样本和标准三维人脸模型生成的虚拟三维人脸模型,称为参考人脸模型。
示例性的,假设有201个人脸图像样本,则计算机系统会对应生成201个参考人脸模型,并依据该201个参考人脸模型的相关数据生成上述参考模型数据库。
此处需要说明的是,创建参考模型数据库的执行主体与后续应用参考模型数据库创建脸部模型的执行主体不必须是同一计算机系统。比如,创建参考模型数据库的执行主体可以是云端计算机系统,比如云端服务器,而上述步骤110~130的执行主体可以是作 为终端设备的计算机系统。鉴于目前终端设备的计算能力不断增强,同时本申请实施例无需数量巨大的人脸图像样本数据,在本申请另一实施例中,参考模型数据库的确定过程和后续脸部模型创建过程可以都由终端设备的计算机系统执行。
参见图5,上述步骤100可以包括:
步骤101,获取含有预设数量的人脸图像样本的人脸图像样本集合。其中,所述人脸图像样本集合中包括表征至少一个局部人脸区域的多种图像样式。
本申请实施例中,人脸图像样本集合中可以包括一定数量的人脸图像样本。其中,上述一定数量的人脸图像样本中尽可能全面地包含例如额部、眼部、鼻部、唇部等各个人脸部位的不同图像样式,以确保对应生成的参考模型数据库包括尽可能全面的参考数据,如参考关键点特征、参考骨骼参数等。
关于人脸图像样本集合包括人脸图像样本的数量,可以满足以下条件:对于随机采集的一张二维人脸图像A,从上述人脸图像样本集合包含的各人脸部位的图像样式中,可以找到上述图像A中不同局部人脸区域对应的图像样式;或者说,根据从上述一定数量的人脸图像样本中选择性提取不同局部人脸区域如五官部位的图像样式,可以大致拼凑出与上述图像A相似的人脸图像。
本申请实施例中,可根据现实世界中存在的常见五官类型,搜集一定数量的人脸图像样本,获得人脸图像样本集合。
本申请一实施例中,根据现实世界不同人的眉毛、眼睛、鼻子、嘴巴、脸部轮廓的样式,搜集了201张人脸图像样本用于确定上述人脸图像样本集合。该201个人脸图像样本可以包含有每一个局部人脸区域的多种图像样式。其中,局部人脸区域是指从二维人脸图像中识别出的眉毛区域、眼睛区域、鼻子区域、脸部轮廓等区域。
比如,该201个人脸图像样本中包含上述图2所示的眉毛区域对应的12种眉型。以此类推,该201个人脸图像样本包含嘴部、眼部、鼻子、脸部轮廓等局部人脸区域分别对应的多种图像样式。
步骤102,依据标准三维人脸模型创建每个人脸图像样本对应的参考三维人脸模型。
本申请实施例中,将针对每一个人脸图像样本创建的虚拟三维人脸模型称为参考三维人脸模型。每一个参考三维人脸模型对应一套骨骼控制参数。
关于如何基于标准三维人脸模型和人脸图像样本创建参考三维人脸模型,可以参见图6,上述步骤102可以包括:
步骤1021,对人脸图像样本进行规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像。其中,标准人脸图像为标准三维人脸模型对应的二维人脸图像。
本申请中,在参考模型数据库建立阶段,可以针对每一幅人脸图像样本进行人脸区域检测、头部姿态校正、图像缩放等规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像。该预处理人脸图像与标准人脸图像相比,可以理解为同一台相机使用相同的图像采集参数、针对相同物距处、两个头部姿态相同的人,分别采 集的人脸图像。
其中,标准人脸图像可以理解为标准三维人脸模型在预设图像坐标系中的投影图像。或者说,标准三维人脸模型是计算机系统对标准人脸图像进行关键点检测后,根据获得的关键点集合比如240个关键点位置坐标,以及预设数量的骨骼如61根骨骼,创建的虚拟三维人脸模型。
步骤1022,对预处理人脸图像进行关键点检测,获得所述人脸图像样本的参考关键点集合,所述参考关键点集合包括表征所述人脸图像样本上的各个局部人脸区域的参考关键点组合。
在获得预处理人脸图像之后,可采用本领域技术人员熟知的任意关键点检测方法从该预处理人脸图像中提取预设数量的关键点,如68个关键点、106个关键点、或者240个关键点。提取的关键点数量越多越能够实现对局部人脸区域的细节表达。
在对人脸区域图像进行关键点检测的过程中,可以采用预设算法如边缘检测robert算法,索贝尔sobel算法等;也可通过相关模型如主动轮廓线snake模型等等来进行关键点检测。
在本申请另一实施例中,可通过用于进行人脸关键点检测的神经网络进行关键点定位。还可通过第三方应用来进行人脸关键点检测,如通过第三方工具包Dlib来进行人脸关键点定位,检测出68个脸部关键点,如图7所示。进一步地,还可以采用240人脸关键点定位技术定位240个关键点的位置坐标,实现对当前人脸图像和/或人脸图像样本中眉毛、眼睛、鼻子、嘴唇、脸部轮廓、面部表情等关键部位细节特征的定位。
关于参考关键点组合,本申请另一实施例中,可以按照预置规则确定各个参考关键点的序号,并确定表征每一个局部人脸区域的参考关键点组合。例如,在图7所示示例中,从人脸图像样本中提取出了68个关键点;由18~22号参考关键点构成的参考关键点组合表征左眉区域。依此类推,采用不同的关键点组合表征不同的局部人脸区域。
本申请中,每一个关键点的信息包括序号和坐标位置。对于不同的人脸图像样本,表征同一局部人脸区域的关键点序号、数量相同,但关键点的坐标位置不同。如上示例,从标准人脸图像提取的第18~22号关键点组合也表征标准人脸图像中的左眉区域,但各个关键点的坐标位置与图7所示示例中第18~22号关键点的坐标位置不同。
此处需要说明的是,关键点的坐标位置是指关键点在预设图像坐标系如图7所示的XOY坐标系中的位置。由于预处理人脸图像的尺寸相同,所以针对各个预处理人脸图像可以采用相同的图像坐标系,来表示不同预处理人脸图像中关键点的位置坐标,以便于进行后续的距离计算。
步骤1023,基于每个所述参考关键点组合对标准三维人脸模型中对应的骨骼参数进行调整,创建所述人脸图像样本对应的参考人脸模型。这样,人脸图像样本对应的参考人脸模型包括与人脸图像样本对应的参考骨骼参数。换言之,参考骨骼参数可表示用于生成(render)人脸图像样本的参考人脸模型。
本申请实施例中,系统预设有关键点组合与骨骼的映射关系,该映射关系可表示在生成对应的三维人脸模型中关键点组合表征的局部人脸区域时,需要调整哪些骨骼 的参数。
示例性的,假设所述标准三维人脸模型中鼻子区域涉及3根骨骼,可以表示为G1~G3,则可以通过对上述3根骨骼调整参数,判断生成的鼻子三维模型与人脸图像样本中的鼻子形状逼近时,确定鼻子部位的三维模型创建完成。相应的,当前3根骨骼的骨骼控制参数为所述人脸图像样本中鼻子的图像样式对应的参考骨骼参数。
依此类推,通过调整各局部人脸区域的骨骼参数,使得生成的虚拟三维人脸模型满足用户期待时,即完成参考人脸模型的创建。同时,可以确定当前人脸图像样本中、每一参考关键点组合对应的参考骨骼参数,也即所述人脸图像样本中各局部人脸区域的图像样式对应的参考骨骼参数,获得当前人脸图像样本对应的参考人脸模型数据。该参考人脸模型数据可以包括每个局部人脸区域的参考关键点组合和参考骨骼参数之间的对应关系。
本申请实施例中,对一个人脸图像样本成功建立虚拟三维人脸模型即参考人脸模型之后,可以根据表征各个局部人脸区域的参考关键点组合与参考骨骼参数之间的对应关系,获得一个人脸图像样本对应的参考人脸模型数据。
以上述步骤1021~步骤1023描述了依据一幅人脸图像样本创建相应的参考人脸模型的过程。
步骤103,根据每一个人脸图像样本对应的参考人脸模型,确定参考模型数据库。其中,参考模型数据库包括表征每一个局部人脸区域的每一种图像样式的参考关键点特征与参考骨骼参数之间的对应关系。
本申请实施例中,可以按照图6所示的方法创建每一幅人脸图像样本对应的参考人脸模型,进而确定每一幅人脸图像样本对应的参考人脸模型数据。
在获取每一个人脸图像样本的参考人脸模型数据之后,可以建立参考模型数据库。其中,该参考模型数据库可以包含表征每一个局部人脸区域的图像样式的参考关键点组合与参考骨骼参数之间的对应关系、每个人脸图像样本的参考关键点特征数据、以及每个参考人脸模型的参考骨骼参数。需要说明的是,在基于骨骼创建模型的方式中,骨骼存在父子骨骼关系,当父骨骼移动时,会带动子骨骼移动,类似手腕的骨骼运动会带动手掌的骨骼运动。一个局部人脸区域的骨骼参数调整,可能与整个人脸模型中其他骨骼的调整参数相关联。因此,本申请实施例中,参考模型数据库中,以整个参考人脸模型对应的一套参考骨骼参数进行数据存储。
以上对如何建立参考模型数据库进行了详细介绍。
在实际应用阶段,计算机系统对输入的当前人脸图像进行关键点检测,获得关键点特征之后,将自动根据当前人脸图像的关键点特征检索参考模型数据库,从参考模型数据库中匹配出不同局部人脸区域的目标骨骼参数。
关于上述步骤110的实施,参见图8,上述步骤110可以包括:
步骤1101,对当前人脸图像进行关键点检测,获得预设数量关键点的位置坐标。
如上所述,计算机系统可对当前人脸图像进行规范化处理,包括对当前人脸图 像进行人脸区域检测、头部姿态校正、图像缩放等处理,以获得与标准人脸图像尺寸相同的预处理图像。之后,可以采用人脸关键点定位技术对预处理图像进行关键点检测。例如,可采用240人脸关键点定位技术对预处理图像进行关键点检测,获得240个人脸关键点的位置坐标。
步骤1102,根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的关键点特征。
本申请实施例中,在获取当前人脸图像的关键点位置坐标之后,可以确定表征当前人脸图像中至少一个局部人脸区域的关键点特征。
对于一个局部人脸区域比如眉毛区域,其关键点特征可以包括以下至少两种表示方式:
方式一,利用关键点的位置坐标组合表示局部人脸区域的关键点特征。例如,可以将表征一个局部人脸区域的关键点坐标组合,作为该局部人脸区域的关键点特征。如图9-1所示,将序号为18~22的关键点的坐标位置组合确定为左眉区域的关键点特征。
本申请中,采用相对固定的关键点(包括关键点数量和各个关键点的序号)表征局部人脸区域,但在不同人脸图像中,同一序号关键点在图像坐标系中的坐标位置不同。比如,在第一人脸图像中,关键点18的坐标位置为(80,15)即第80行第15列像素定位的位置。在第二人脸图像中,关键点18的坐标位置可以为(100,20)即第100行第20列像素定位的位置。因此,可以利用关键点组合的位置坐标有效区分不同人的脸部特征。
方式二,利用关键点坐标组合的拟合曲线表示局部人脸区域的关键点特征。例如,可以根据表征一个局部人脸区域的关键点坐标组合,拟合出表征该局部人脸区域的特征曲线,并将该特征曲线作为该局部人脸区域的关键点特征。如图9-2所示,将根据序号为18~22的关键点的坐标位置拟合而成的特征曲线作为左眉区域的关键点特征。同理,参见图9-3,根据眼部关键点1~12的位置坐标拟合出眼睑特征曲线,作为左眼的关键点特征。
由于不同人的脸部的各关键点的位置坐标不同,根据关键点位置坐标拟合出的曲线形状也不同。因此,可以将上述特征曲线作为表征当前人脸图像中局部人脸区域的关键点特征,以区分不同人的脸。
关于上述步骤120的实施,可以通过关键点特征之间的相似度从参考模型数据库中查找与当前人脸图像匹配的目标骨骼参数。参见图10,上述步骤120可以包括:
步骤121,针对当前人脸图像中每个局部人脸区域,确定参考模型数据库中与该局部人脸区域的关键点特征匹配的参考关键点特征作为该局部人脸区域的目标参考关键点特征。
参见图11,针对当前人脸图像中的每个局部人脸区域,上述步骤121可以包括:
步骤1211,确定局部人脸区域的关键点特征与参考模型数据库中对应的参考关键点特征之间的相似度。其中,参考模型数据库中对应的参考关键点特征可以是,参考 模型数据库中与该局部人脸区域的位置对应的参考关键点特征。
本申请实施例中,根据关键点特征的表现形式不同,可以采用不同的度量方式从参考模型数据库中查找匹配的参考关键点特征。
例如,在参考模型数据库中存储的参考关键点特征为参考关键点坐标组合的情况下,可以根据关键点坐标组合之间的欧式距离,确定局部人脸区域的关键点特征与参考关键点特征之间的相似度。
以图9-1所示，可以分别计算当前人脸图像中的关键点18~22的位置坐标与任一人脸图像样本中关键点18~22的位置坐标之间的欧式距离，分别表示为 l_18、l_19、l_20、l_21、l_22，其中，l_18 表示当前人脸图像的关键点18的位置坐标与人脸图像样本中关键点18的位置坐标之间的欧式距离，依此类推。两个图像中左眉区域的相似度可以表示为关键点18~22的欧式距离之和 L。在一实施例中，L 可以表示为：
L = l_18 + l_19 + l_20 + l_21 + l_22
在本申请另一实施例中，上述相似度还可以表示为关键点之间欧式距离的加权值。仍如上示例，可以根据实际应用场景，对各个关键点设置预设权重，比如，分别对关键点18~22设置的权重为 α_1、α_2、α_3、α_4、α_5，则 L 可以表示为：
L = α_1*l_18 + α_2*l_19 + α_3*l_20 + α_4*l_21 + α_5*l_22
两个关键点坐标组合之间的欧式距离越小,说明该两个关键点坐标组合表征的局部人脸区域之间的相似度越高。
在参考模型数据库中存储的参考关键点特征为参考特征曲线的情况下,参见图12,上述步骤1211可以包括:
步骤12111,根据局部人脸区域的关键点坐标组合,拟合出表征该局部人脸区域的特征曲线。
仍如图9-1所示,在从当前人脸图像中确定出第18~22号关键点的位置坐标后,可以按照预置规则如从左往右的顺序拟合出一条特征曲线,如图9-2所示。
步骤12112,根据所述特征曲线与参考模型数据库中对应的参考特征曲线之间的距离,确定该局部人脸区域的关键点特征与参考模型数据库中对应的参考关键点特征之间的相似度。
本申请实施例中,可以利用弗雷歇frechet距离值来度量关键点特征之间的相似性。两个特征曲线之间的frechet距离值越小,说明两条特征曲线的形状越相似,即相似度越高,相应的,两条特征曲线分别对应的局部人脸区域之间的相似度也越大。
在本申请另一实施例中,还可以采用二者相结合的方式来确定上述目标参考关键点特征。具体地,对于当前人脸图像中任一局部人脸区域的关键点坐标组合,可以分别计算该关键点坐标组合与参考模型数据库中每一个对应的参考关键点坐标组合的欧式距离。若所述参考模型数据库中存在至少两个参考关键点坐标组合与该关键点坐标组合的欧式距离值相同,则进一步计算所述至少两个参考关键点坐标组合中的每个所述参考关键点组合与所述关键点坐标组合之间的frechet距离值,从而有效识别出与当前人脸 图像中的该关键点坐标组合对应的特征曲线形状最接近的目标参考关键点特征。
在本申请另一实施例中,还可以根据局部人脸区域的分布特点,采用相应的策略确定所述局部人脸区域的关键点特征与参考模型数据库中对应的参考关键点特征之间的相似度。
例如,在局部人脸区域包括至少两个子区域、也即该局部人脸区域可由至少两个子区域的关键点特征进行表征的情况下,参见图13,在确定该局部人脸区域的关键点特征与参考模型数据库中一个人脸图像样本对应的参考关键点特征之间的相似度时,上述步骤1211可以包括:
步骤1211-1、针对该局部人脸区域中的每个子区域,确定该子区域的关键点特征与参考模型数据库中该人脸图像样本的对应子区域的参考关键点特征之间的相似度,获得该子区域对应的局部相似度。
其中,该人脸图像样本的对应子区域是指,该人脸图像样本中与该局部人脸区域中当前正在处理的子区域的位置对应的子区域。比如,眼睛区域的关键点特征包括左眼区域和右眼区域分别对应的关键点特征。眉毛区域的关键点特征包括左眉区域和右眉区域分别对应的关键点特征。嘴巴区域的关键点特征包括上嘴唇区域和下嘴唇区域分别对应的关键点特征。
以眼睛区域为例，可按照上述任一确定方法计算左眼区域的关键点特征与参考模型数据库中各个左眼参考关键点特征之间的相似度，以及右眼区域的关键点特征与参考模型数据库中各个右眼参考关键点特征之间的相似度。本申请实施例中，将一个子区域的关键点特征与参考模型数据库中相应子区域的参考关键点特征之间的相似度，称为局部相似度。将当前人脸图像中左、右眼区域的关键点特征与参考模型数据库中一个参考人脸模型的左、右眼区域的参考关键点特征进行比较后，将获得一对局部相似度。
如上述示例,对于由201个人脸图像样本生成的参考模型数据库,针对当前人脸图像中眼部区域的关键点特征,将获得201对局部相似度。
步骤1211-2、根据每个子区域对应的局部相似度,确定所述局部人脸区域与该人脸图像样本的对应局部人脸区域之间的整体相似度,作为该局部人脸区域的关键点特征与参考模型数据库中该人脸图像样本的对应参考关键点特征之间的相似度。
对于如眼睛、嘴巴、眉毛等可以由两个子区域的关键点特征进行表征的局部人脸区域,在计算每个子区域对应的局部相似度之后,可以将两个局部相似度求和或者加权求和,作为当前人脸图像与一个人脸图像样本对应区域的相似度。
本申请实施例中,对于局部人脸区域包括多个子区域的情况,可以基于多个子区域的局部相似度更加准确地比较上述眼睛、眉毛、嘴巴等该局部人脸区域的整体相似度,进而更加准确地从参考模型数据库中确定该局部人脸区域的目标骨骼参数。
步骤1212,将相似度最高的参考关键点特征确定为所述局部人脸区域的目标参考关键点特征。
按照上述步骤1211所述的任一方法,分别计算当前人脸图像中局部人脸区域的 关键点特征与参考模型数据库中每一个人脸图像样本的对应参考关键点特征之间的相似度,并将相似度最高的参考关键点特征确定为当前人脸图像中该局部人脸区域的目标参考关键点特征。
步骤122,依据与当前人脸图像中每个局部人脸区域的目标参考关键点特征对应的参考骨骼参数,确定当前人脸图像的目标骨骼参数。
在本申请一实施例中,参考模型数据库中存储有每个局部人脸区域的每种图像样式对应的参考关键点特征,以及上述参考关键点特征与参考骨骼参数之间的对应关系。
假设可以从每一幅人脸图像划分中出M个局部人脸区域,比如,5个局部人脸区域,分别为眉毛、眼睛、鼻子、嘴巴、脸部轮廓。
假设第 i 个局部人脸区域对应的图像样式数量为 n_i，则上述参考模型数据库中至少存储有 N 个参考关键点特征，N = n_1 + n_2 + …… + n_M，其中，n_i 表示该局部人脸区域的图像样式数量，i 为一个局部人脸区域的标识，i∈(1,M)。示例性的，假设数字1标识眉毛，则 n_1 表示眉型的数量，对应图2所示的示例中，n_1=12。
每一个参考关键点特征对应一组参考骨骼参数,因此,上述参考模型数据库中至少存储有N个参考关键点特征对应的N组参考骨骼参数。
若上述参考关键点特征是由参考关键点的坐标位置拟合而成的参考特征曲线,则上述参考模型数据库中至少存储有N条参考特征曲线对应的N组参考骨骼参数。
若上述参考关键点特征是参考关键点的坐标位置组合,即参考关键点坐标组合,上述参考模型数据库中至少存储有N个参考关键点坐标组合对应的N组参考骨骼参数。
计算机系统从当前人脸图像中获取局部人脸区域的关键点特征之后,可以确定参考模型数据库中与该关键点特征匹配、例如最相似的参考关键点特征作为目标参考关键点特征,进而根据目标参考关键点特征对应的参考骨骼参数,确定适用于上述关键点特征的目标骨骼参数。比如,直接将目标参考关键点特征对应的参考骨骼参数,确定为适用当前人脸图像中上述关键点特征对应的局部人脸区域的目标骨骼参数。
本申请实施例中,通过结合关键点特征的表达方式采用欧式距离或者frechet距离来度量当前人脸图像中局部人脸区域的关键点特征与参考模型数据库中对应的参考关键点特征之间的相似度,可相对准确、快速地基于参考模型数据库确定当前人脸图像中局部人脸区域的目标骨骼参数,并确定待创建脸部模型的骨骼调整参数,从而可有效提升预设应用场景如游戏场景中扭脸功能应用的用户体验。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于数据处理设备实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
对于前述的各方法实施例,为了简单描述,将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本公开并不受所描述的动作顺序的限制,因为依据本公开,某些步骤可以采用其他顺序或者同时进行。
其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本公开所必须的。
与前述应用功能实现方法实施例相对应,本公开还提供了应用功能实现装置及相应终端的实施例。
相应的,本申请实施例提供了一种创建脸部模型的装置,参见图14,所述装置可以包括:关键点检测模块21,用于对当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征;参数匹配模块22,用于根据所述至少一个关键点特征获取与所述当前人脸图像匹配的目标骨骼参数;模型创建模块23,用于依据所述目标骨骼参数和标准三维人脸模型,创建所述当前人脸图像对应的虚拟三维脸部模型。
在本申请一装置实施例中,所述模型创建模块23输出的所述当前人脸图像对应的虚拟三维脸部模型可以是与所述当前人脸图像对应的卡通化虚拟三维人脸模型。
在本申请另一装置实施例中,所述模型创建模块23输出的所述当前人脸图像对应的虚拟三维脸部模型也可以是与所述当前人脸图像中的实际人脸近似的虚拟三维人脸模型,即与现实人脸逼真的虚拟三维人脸模型。
参见图15,在图14所示装置实施例的基础上,所述装置还可以包括:数据库创建模块20,用于根据预设数量的人脸图像样本和所述标准三维人脸模型,确定所述参考模型数据库。其中,所述参考模型数据库包括从预设数量的人脸图像样本确定的至少一个参考关键点特征以及所述至少一个参考关键点特征各自对应的参考骨骼参数。在这种情况下,所述参数匹配模块22具体用于:根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数。
参见图16,在图15所示装置实施例的基础上,所述数据库创建模块20,可以包括:样本获取子模块201,用于获取含有所述预设数量的所述人脸图像样本的人脸图像样本集合,所述人脸图像样本集合中包括表征至少一种局部人脸区域的多种图像样式;参考模型创建子模块202,用于针对每个所述人脸图像样本,依据所述标准三维人脸模型创建所述人脸图像样本对应的参考人脸模型,所述参考人脸模型包括与所述人脸图像样本对应的所述参考骨骼参数;数据库确定子模块203,用于根据每个所述人脸图像样本对应的参考人脸模型,确定所述参考模型数据库。其中,所述参考模型数据库包括表征每一个所述局部人脸区域的每一种所述图像样式的所述关键点特征与所述参考骨骼参数之间的对应关系。
参见图17,在图16所示装置实施例的基础上,所述参考模型创建子模块202,可以包括:图像预处理单元2021,用于对一幅所述人脸图像样本进行规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像,其中,所述标准人脸图像为与所述标准三维人脸模型对应的二维人脸图像;关键点检测单元2022,用于对所述预处理人脸图像进行关键点检测,获得所述人脸图像样本的参考关键点集合,所述参考关键点集合包括表征所述人脸图像样本上的各所述局部人脸区域的参考关键点组合;参考模型创建单元2023,用于基于每个所述参考关键点组合对所述标准三维人脸模型中对应的骨骼参数进行调整,创建所述人脸图像样本对应的所述参考人脸模型。
参见图18,在图14~图17任一所示装置实施例的基础上,所述关键点检测模块21,可以包括:关键点定位子模块211,用于对所述当前人脸图像进行关键点检测,获得预设数量关键点的位置坐标;关键点特征确定子模块212,用于根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的关键点特征。
本申请实施例中,所述关键点特征可以包括关键点坐标组合,和/或,特征曲线。
相应的,参见图19,在图18所示装置实施例的基础上,所述关键点特征确定子模块212,可以包括:坐标组合确定单元2121,用于基于所述预设数量关键点的位置坐标,确定表征所述当前图像上第一局部人脸区域的关键点坐标组合作为表征该第一局部人脸区域的所述关键点特征,其中,该第一局部人脸区域位所述至少一个局部人脸区域中的任一个;特征曲线确定单元2122,用于根据表征该第一局部人脸区域的关键点坐标组合,拟合出表征该第一局部人脸区域的特征曲线作为表征该第一局部人脸区域的所述关键点特征。
本申请中,所述至少一种局部人脸区域包括以下至少一项:眉毛、眼睛、鼻子、嘴巴、脸部轮廓。
上述图19所示装置实施例对应关键点特征确定子模块212包括坐标组合确定单元2121和特征曲线确定单元2122的情况。在本申请另一装置实施例中,关键点特征确定子模块212可以包括坐标组合确定单元2121,或者,特征曲线确定单元2122。
参见图20,在图14~图19任一所示装置实施例的基础上,所述参数匹配模块22,可以包括:特征匹配子模块221,用于针对所述当前人脸图像中每个局部人脸区域,确定所述参考模型数据库中与所述局部人脸区域的关键点特征匹配的参考关键点特征,作为所述局部人脸区域的目标参考关键点特征;骨骼参数确定子模块222,用于依据与所述当前人脸图像中每个所述局部人脸区域的目标参考关键点特征对应的参考骨骼参数,确定所述当前人脸图像的目标骨骼参数。
参见图21,在图20所示装置实施例的基础上,所述特征匹配子模块221,可以包括:相似度确定单元2211,用于确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度;目标特征确定单元2212,用于将相似度最高的所述参考关键点特征确定为所述局部人脸区域的所述目标参考关键点特征。
在本申请一装置实施例中,所述关键点特征可以为根据关键点位置坐标拟合成的特征曲线。相应的,参见图22,在图21所示装置实施例的基础上,所述相似度确定单元2211,可以包括:曲线拟合子单元2201,用于根据一个局部人脸区域的关键点坐标组合,拟合出表征所述局部人脸区域的特征曲线;相似度确定子单元2202,用于根据所述特征曲线与所述参考模型数据库中对应的参考特征曲线之间的距离,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度。
本申请实施例中,所述距离可以包括欧式距离或者弗雷歇frechet距离。相应的,所述目标特征确定单元2212,可以用于将所述距离值最小的参考特征曲线确定为所述目标参考关键点特征。
参见图23,在图21所示装置实施例的基础上,所述相似度确定单元2211,可 以包括:局部相似度确定子单元22111,用于在一个所述局部人脸区域包括至少两个子区域的情况下,针对所述局部人脸区域中的每个所述子区域,针对所述参考模型数据库中的每个人脸图像样本,确定所述子区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应子区域的参考关键点特征之间的相似度,获得所述子区域对应的局部相似度;整体相似度确定子单元22112,用于针对所述参考模型数据库中的每个人脸图像样本,根据每个所述子区域对应的所述局部相似度,确定所述局部人脸区域与该人脸图像样本中的对应局部人脸区域之间的整体相似度,作为所述局部人脸区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应参考关键点特征之间的相似度。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本申请方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
对应于上述的创建脸部模型的方法，本申请实施例还提出了根据本申请的一示例性实施例的电子设备的示意结构图。请参考图24，在硬件层面，该电子设备包括处理器241、内部总线242、网络接口243、内存245以及非易失性存储器246，当然还可能包括其他业务所需要的硬件。处理器241从非易失性存储器246中读取对应的计算机程序到内存245中然后运行，在逻辑层面上形成创建脸部模型的装置。当然，除了软件实现方式之外，本申请并不排除其他实现方式，比如逻辑器件抑或软硬件结合的方式等等，也就是说以下处理流程的执行主体并不限定于各个逻辑单元，也可以是硬件或逻辑器件。
本领域技术人员应明白,本说明书一个或多个实施例可提供为方法、系统或计算机程序产品。因此,本说明书一个或多个实施例可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本说明书一个或多个实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
本说明书实施例还提供一种计算机可读存储介质,该存储介质上可以存储有计算机程序,所述程序被处理器执行时实现本说明书图1至图13任一实施例提供的创建脸部模型的方法的步骤。
本说明书中描述的主题及功能操作的实施例可以在以下中实现:数字电子电路、有形体现的计算机软件或固件、包括本说明书中公开的结构及其结构性等同物的计算机硬件、或者它们中的一个或多个的组合。本说明书中描述的主题的实施例可以实现为一个或多个计算机程序,即编码在有形非暂时性程序载体上以被数据处理装置执行或控制数据处理装置的操作的计算机程序指令中的一个或多个模块。可替代地或附加地,程序指令可以被编码在人工生成的传播信号上,例如机器生成的电、光或电磁信号,该信号被生成以将信息编码并传输到合适的接收机装置以由数据处理装置执行。计算机存储介质可以是机器可读存储设备、机器可读存储基板、随机或串行存取存储器设备、或它们中的一个或多个的组合。
本说明书中描述的处理及逻辑流程可以由执行一个或多个计算机程序的一个或多个可编程计算机执行,以通过根据输入数据进行操作并生成输出来执行相应的功能。所述处理及逻辑流程还可以由专用逻辑电路—例如FPGA(现场可编程门阵列)或ASIC(专用集成电路)来执行,并且装置也可以实现为专用逻辑电路。
适合用于执行计算机程序的计算机包括,例如通用和/或专用微处理器,或任何其他类型的中央处理单元。通常,中央处理单元将从只读存储器和/或随机存取存储器接收指令和数据。计算机的基本组件包括用于实施或执行指令的中央处理单元以及用于存储指令和数据的一个或多个存储器设备。通常,计算机还将包括用于存储数据的一个或多个大容量存储设备,例如磁盘、磁光盘或光盘等,或者计算机将可操作地与此大容量存储设备耦接以从其接收数据或向其传送数据,抑或两种情况兼而有之。然而,计算机不是必须具有这样的设备。此外,计算机可以嵌入在另一设备中,例如移动电话、个人数字助理(PDA)、移动音频或视频播放器、游戏操纵台、全球定位系统(GPS)接收机、或例如通用串行总线(USB)闪存驱动器的便携式存储设备,仅举几例。
适合于存储计算机程序指令和数据的计算机可读介质包括所有形式的非易失性存储器、媒介和存储器设备,例如包括半导体存储器设备(例如EPROM、EEPROM和闪存设备)、磁盘(例如内部硬盘或可移动盘)、磁光盘以及CD ROM和DVD-ROM盘。处理器和存储器可由专用逻辑电路补充或并入专用逻辑电路中。
虽然本说明书包含许多具体实施细节,但是这些不应被解释为限制任何发明的范围或所要求保护的范围,而是主要用于描述特定发明的具体实施例的特征。本说明书内在多个实施例中描述的某些特征也可以在单个实施例中被组合实施。另一方面,在单个实施例中描述的各种特征也可以在多个实施例中分开实施或以任何合适的子组合来实施。此外,虽然特征可以如上所述在某些组合中起作用并且甚至最初如此要求保护,但是来自所要求保护的组合中的一个或多个特征在一些情况下可以从该组合中去除,并且所要求保护的组合可以指向子组合或子组合的变型。
类似地,虽然在附图中以特定顺序描绘了操作,但是这不应被理解为要求这些操作以所示的特定顺序执行或顺次执行、或者要求所有例示的操作被执行,以实现期望的结果。在某些情况下,多任务和并行处理可能是有利的。此外,上述实施例中的各种系统模块和组件的分离不应被理解为在所有实施例中均需要这样的分离,并且应当理解,所描述的程序组件和系统通常可以一起集成在单个软件产品中,或封装成多个软件产品。
由此,主题的特定实施例已被描述。其他实施例在所附权利要求书的范围以内。在某些情况下,权利要求书中记载的动作可以以不同的顺序执行并且仍实现期望的结果。此外,附图中描绘的处理并非必需所示的特定顺序或顺次顺序,以实现期望的结果。在某些实现中,多任务和并行处理可能是有利的。
以上所述仅为本说明书一个或多个实施例的较佳实施例而已,并不用以限制本说明书一个或多个实施例,凡在本说明书一个或多个实施例的精神和原则之内,所做的任何修改、等同替换、改进等,均应包含在本说明书一个或多个实施例保护的范围之内。

Claims (22)

  1. 一种创建脸部模型的方法,包括:
    对当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征;
    根据所述至少一个关键点特征,获取与所述当前人脸图像匹配的目标骨骼参数;
    依据所述目标骨骼参数和标准三维人脸模型,创建所述当前人脸图像对应的虚拟三维脸部模型。
  2. 根据权利要求1所述的方法,其特征在于,
    所述方法还包括:根据预设数量的人脸图像样本和所述标准三维人脸模型,确定参考模型数据库,其中,所述参考模型数据库包括从预设数量的人脸图像样本确定的至少一个参考关键点特征以及所述至少一个参考关键点特征各自对应的参考骨骼参数;
    根据所述至少一个关键点特征,获取与所述当前人脸图像匹配的目标骨骼参数,包括:根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数。
  3. 根据权利要求2所述的方法,其特征在于,根据所述预设数量的人脸图像样本和所述标准三维人脸模型,确定所述参考模型数据库,包括:
    获取含有所述预设数量的人脸图像样本的人脸图像样本集合,所述人脸图像样本集合中包括表征至少一个局部人脸区域的多种图像样式;
    针对每个所述人脸图像样本,依据所述标准三维人脸模型创建所述人脸图像样本对应的参考人脸模型,所述参考人脸模型包括与所述人脸图像样本对应的所述参考骨骼参数;
    根据每个所述人脸图像样本对应的参考人脸模型,确定所述参考模型数据库,
    其中,所述参考模型数据库包括表征每一个所述局部人脸区域的每一种所述图像样式的所述参考关键点特征与所述参考骨骼参数之间的对应关系。
  4. 根据权利要求3所述的方法,其特征在于,依据所述标准三维人脸模型,创建所述人脸图像样本对应的所述参考人脸模型,包括:
    对所述人脸图像样本进行规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像,其中,所述标准人脸图像为与所述标准三维人脸模型对应的二维人脸图像;
    对所述预处理人脸图像进行关键点检测,获得所述人脸图像样本的参考关键点集合,所述参考关键点集合包括表征所述人脸图像样本上的各所述局部人脸区域的参考关键点组合;
    基于每个所述参考关键点组合对所述标准三维人脸模型中对应的骨骼参数进行调整,创建所述人脸图像样本对应的所述参考人脸模型。
  5. 根据权利要求1~4任一所述的方法,其特征在于,所述对所述当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征,包括:
    对所述当前人脸图像进行关键点检测,获得预设数量关键点的位置坐标;
    根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的关键点特征。
  6. 根据权利要求5所述的方法,其特征在于,根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的所述关键点特征,包括:
    基于所述预设数量关键点的位置坐标,确定表征所述当前图像上第一局部人脸区域的关键点坐标组合作为表征该第一局部人脸区域的所述关键点特征,其中,该第一局部人脸区域为所述至少一个局部人脸区域中的任一个;和/或,
    根据表征该第一局部人脸区域的所述关键点坐标组合,拟合出表征该第一局部人脸区域的特征曲线作为表征该第一局部人脸区域的所述关键点特征。
  7. 根据权利要求2~6任一所述的方法,其特征在于,根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数,包括:
    针对所述当前人脸图像中每个局部人脸区域,确定所述参考模型数据库中与所述局部人脸区域的关键点特征匹配的参考关键点特征作为所述局部人脸区域的目标参考关键点特征;
    依据与所述当前人脸图像中每个所述局部人脸区域的目标参考关键点特征对应的参考骨骼参数,确定所述当前人脸图像的目标骨骼参数。
  8. 根据权利要求7所述的方法,其特征在于,确定所述参考模型数据库中与所述局部人脸区域的关键点特征匹配的参考关键点特征作为所述局部人脸区域的目标参考关键点特征,包括:
    确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度;
    将相似度最高的所述参考关键点特征确定为所述局部人脸区域的所述目标参考关键点特征。
  9. 根据权利要求8所述的方法,其特征在于,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度,包括:
    根据所述局部人脸区域的关键点坐标组合,拟合出表征所述局部人脸区域的特征曲线;
    根据所述特征曲线与所述参考模型数据库中对应的参考特征曲线之间的距离,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度。
  10. 根据权利要求8或9所述的方法,其特征在于,在所述局部人脸区域包括至少两个子区域的情况下,确定所述局部人脸区域的关键点特征与所述参考模型数据库中一个人脸图像样本的对应参考关键点特征之间的相似度,包括:
    针对所述局部人脸区域中的每个所述子区域,确定所述子区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应子区域的参考关键点特征之间的相似度,获得所述子区域对应的局部相似度;
    根据每个所述子区域对应的所述局部相似度,确定所述局部人脸区域与该人脸图像样本中的对应局部人脸区域之间的整体相似度,作为所述局部人脸区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应参考关键点特征之间的相似度。
  11. 一种创建脸部模型的装置,包括:
    关键点检测模块,用于对当前人脸图像进行关键点检测,获得所述当前人脸图像的至少一个关键点特征;
    参数匹配模块,用于根据所述至少一个关键点特征获取与所述当前人脸图像匹配的目标骨骼参数;
    模型创建模块,用于依据所述目标骨骼参数和标准三维人脸模型,创建所述当前人脸图像对应的虚拟三维脸部模型。
  12. 根据权利要求11所述的装置,其特征在于,
    所述装置还包括数据库创建模块,用于根据预设数量的人脸图像样本和所述标准三维人脸模型,确定所述参考模型数据库,其中,所述参考模型数据库包括从预设数量的人脸图像样本确定的至少一个参考关键点特征以及所述至少一个参考关键点特征各自对应的参考骨骼参数;
    所述参数匹配模块具体用于:根据所述至少一个关键点特征,从所述参考模型数据库中获取与所述当前人脸图像匹配的目标骨骼参数。
  13. 根据权利要求12所述的装置,其特征在于,所述数据库创建模块,包括:
    样本获取子模块,用于获取含有所述预设数量的所述人脸图像样本的人脸图像样本集合,所述人脸图像样本集合中包括表征至少一种局部人脸区域的多种图像样式;
    参考模型创建子模块,用于针对每个所述人脸图像样本,依据所述标准三维人脸模型创建所述人脸图像样本对应的参考人脸模型,所述参考人脸模型包括与所述人脸图像样本对应的所述参考骨骼参数;
    数据库确定子模块,用于根据每个所述人脸图像样本对应的参考人脸模型,确定所述参考模型数据库,
    其中,所述参考模型数据库包括表征每一个所述局部人脸区域的每一种所述图像样式的所述关键点特征与所述参考骨骼参数之间的对应关系。
  14. 根据权利要求13所述的装置,其特征在于,所述参考模型创建子模块,包括:
    图像预处理单元,用于对一幅所述人脸图像样本进行规范化处理,获得与标准人脸图像的头部姿态和图像尺寸符合的预处理人脸图像,其中,所述标准人脸图像为与所述标准三维人脸模型对应的二维人脸图像;
    关键点检测单元,用于对所述预处理人脸图像进行关键点检测,获得所述人脸图像样本的参考关键点集合,所述参考关键点集合包括表征所述人脸图像样本上的各所述局部人脸区域的参考关键点组合;
    参考模型创建单元,用于基于每个所述参考关键点组合对所述标准三维人脸模型中对应的骨骼参数进行调整,创建所述人脸图像样本对应的所述参考人脸模型。
  15. 根据权利要求11~14任一所述的装置,其特征在于,所述关键点检测模块包括:
    关键点定位子模块,用于对所述当前人脸图像进行关键点检测,获得预设数量关键点的位置坐标;
    关键点特征确定子模块,用于根据所述预设数量关键点的位置坐标确定表征所述当前人脸图像上的至少一个局部人脸区域的关键点特征。
  16. 根据权利要求15所述的装置,其特征在于,所述关键点特征确定子模块包括:
    坐标组合确定单元,用于基于所述预设数量关键点的位置坐标,确定表征所述当前图像上第一局部人脸区域的关键点坐标组合作为表征该第一局部人脸区域的所述关键点特征,其中,该第一局部人脸区域位所述至少一个局部人脸区域中的任一个;和/或,
    特征曲线确定单元,用于根据表征该第一局部人脸区域的关键点坐标组合,拟合出表征该第一局部人脸区域的特征曲线作为表征该第一局部人脸区域的所述关键点特征。
  17. 根据权利要求12~16任一所述的装置,其特征在于,所述参数匹配模块,包括:
    特征匹配子模块,用于针对所述当前人脸图像中每个局部人脸区域,确定所述参考模型数据库中与所述局部人脸区域的关键点特征匹配的参考关键点特征,作为所述局部人脸区域的目标参考关键点特征;
    骨骼参数确定子模块,用于依据与所述当前人脸图像中每个所述局部人脸区域的目标参考关键点特征对应的参考骨骼参数,确定所述当前人脸图像的目标骨骼参数。
  18. 根据权利要求17所述的装置,其特征在于,所述特征匹配子模块,包括:
    相似度确定单元,用于确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度;
    目标特征确定单元,用于将相似度最高的所述参考关键点特征确定为所述局部人脸区域的所述目标参考关键点特征。
  19. 根据权利要求18所述的装置,其特征在于,所述相似度确定单元,包括:
    曲线拟合子单元,用于根据所述局部人脸区域的关键点坐标组合,拟合出表征所述局部人脸区域的特征曲线;
    相似度确定子单元,用于根据所述特征曲线与所述参考模型数据库中对应的参考特征曲线之间的距离,确定所述局部人脸区域的关键点特征与所述参考模型数据库中对应的参考关键点特征之间的相似度。
  20. 根据权利要求18或19所述的装置,其特征在于,所述相似度确定单元,包括:
    局部相似度确定子单元,用于在一个所述局部人脸区域包括至少两个子区域的情况下,针对所述局部人脸区域中的每个所述子区域,针对所述参考模型数据库中的每个人脸图像样本,确定所述子区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应子区域的参考关键点特征之间的相似度,获得所述子区域对应的局部相似度;
    整体相似度确定子单元,用于针对所述参考模型数据库中的每个人脸图像样本,根据每个所述子区域对应的所述局部相似度,确定所述局部人脸区域与该人脸图像样本中的对应局部人脸区域之间的整体相似度,作为所述局部人脸区域的关键点特征与所述参考模型数据库中该人脸图像样本的对应参考关键点特征之间的相似度。
  21. 一种计算机可读存储介质,存储有计算机程序,所述计算机程序被处理器执行时实现上述权利要求1-10中任一项所述的方法。
  22. 一种电子设备,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述程序时实现上述权利要求1-10中任一项所述的方法。
PCT/CN2020/076134 2019-05-15 2020-02-21 一种创建脸部模型的方法、装置、电子设备及计算机可读存储介质 WO2020228389A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021516410A JP7191213B2 (ja) 2019-05-15 2020-02-21 顔モデルの生成方法、装置、電子機器及びコンピュータ可読記憶媒体
KR1020217008646A KR102523512B1 (ko) 2019-05-15 2020-02-21 얼굴 모델의 생성
SG11202103190VA SG11202103190VA (en) 2019-05-15 2020-02-21 Face model creation
US17/212,523 US11836943B2 (en) 2019-05-15 2021-03-25 Virtual face model creation based on key point

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910403884.8 2019-05-15
CN201910403884.8A CN110111418B (zh) 2019-05-15 2019-05-15 创建脸部模型的方法、装置及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/212,523 Continuation US11836943B2 (en) 2019-05-15 2021-03-25 Virtual face model creation based on key point

Publications (1)

Publication Number Publication Date
WO2020228389A1 true WO2020228389A1 (zh) 2020-11-19

Family

ID=67490284

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/076134 WO2020228389A1 (zh) 2019-05-15 2020-02-21 一种创建脸部模型的方法、装置、电子设备及计算机可读存储介质

Country Status (7)

Country Link
US (1) US11836943B2 (zh)
JP (1) JP7191213B2 (zh)
KR (1) KR102523512B1 (zh)
CN (1) CN110111418B (zh)
SG (1) SG11202103190VA (zh)
TW (1) TW202044202A (zh)
WO (1) WO2020228389A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112973122A (zh) * 2021-03-02 2021-06-18 网易(杭州)网络有限公司 游戏角色上妆方法、装置及电子设备

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108615014B (zh) * 2018-04-27 2022-06-21 京东方科技集团股份有限公司 一种眼睛状态的检测方法、装置、设备和介质
CN110111418B (zh) * 2019-05-15 2022-02-25 北京市商汤科技开发有限公司 创建脸部模型的方法、装置及电子设备
CN110675475B (zh) * 2019-08-19 2024-02-20 腾讯科技(深圳)有限公司 一种人脸模型生成方法、装置、设备及存储介质
CN110503700A (zh) * 2019-08-26 2019-11-26 北京达佳互联信息技术有限公司 生成虚拟表情的方法、装置、电子设备及存储介质
CN110705448B (zh) * 2019-09-27 2023-01-20 北京市商汤科技开发有限公司 一种人体检测方法及装置
CN110738157A (zh) * 2019-10-10 2020-01-31 南京地平线机器人技术有限公司 虚拟面部的构建方法及装置
CN111354079B (zh) * 2020-03-11 2023-05-02 腾讯科技(深圳)有限公司 三维人脸重建网络训练及虚拟人脸形象生成方法和装置
CN111640204B (zh) * 2020-05-14 2024-03-19 广东小天才科技有限公司 三维对象模型的构建方法、构建装置、电子设备及介质
CN111738087B (zh) * 2020-05-25 2023-07-25 完美世界(北京)软件科技发展有限公司 一种游戏角色面部模型的生成方法和装置
CN111652798B (zh) * 2020-05-26 2023-09-29 浙江大华技术股份有限公司 人脸姿态迁移方法和计算机存储介质
CN111714885A (zh) * 2020-06-22 2020-09-29 网易(杭州)网络有限公司 游戏角色模型生成、角色调整方法、装置、设备及介质
CN112232183B (zh) * 2020-10-14 2023-04-28 抖音视界有限公司 虚拟佩戴物匹配方法、装置、电子设备和计算机可读介质
CN114727002A (zh) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 拍摄方法、装置、终端设备及存储介质
CN112767348B (zh) * 2021-01-18 2023-11-24 上海明略人工智能(集团)有限公司 一种检测信息的确定方法和装置
CN112967364A (zh) * 2021-02-09 2021-06-15 咪咕文化科技有限公司 一种图像处理方法、装置及设备
KR102334666B1 (ko) 2021-05-20 2021-12-07 알레시오 주식회사 얼굴 이미지 생성 방법
CN114299595A (zh) * 2022-01-29 2022-04-08 北京百度网讯科技有限公司 人脸识别方法、装置、设备、存储介质和程序产品
KR20230151821A (ko) 2022-04-26 2023-11-02 주식회사 리본소프트 메타버스에 이용되는 3차원 미형 캐릭터 생성 시스템 및 방법
KR102494222B1 (ko) * 2022-09-13 2023-02-06 주식회사 비브스튜디오스 자동 3d 눈썹 생성 방법
CN115393532B (zh) * 2022-10-27 2023-03-14 科大讯飞股份有限公司 脸部绑定方法、装置、设备及存储介质
KR20240069548A (ko) 2022-11-10 2024-05-20 알레시오 주식회사 이미지 변환 모델 제공 방법, 서버 및 컴퓨터 프로그램
CN115797569B (zh) * 2023-01-31 2023-05-02 盾钰(上海)互联网科技有限公司 高精度数孪人面部表情动作细分的动态生成方法及系统
CN117152308B (zh) * 2023-09-05 2024-03-22 江苏八点八智能科技有限公司 一种虚拟人动作表情优化方法与系统
CN117593493A (zh) * 2023-09-27 2024-02-23 书行科技(北京)有限公司 三维脸部拟合方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080037836A1 (en) * 2006-08-09 2008-02-14 Arcsoft, Inc. Method for driving virtual facial expressions by automatically detecting facial expressions of a face image
CN107705365A (zh) * 2017-09-08 2018-02-16 郭睿 可编辑的三维人体模型创建方法、装置、电子设备及计算机程序产品
CN109671016A (zh) * 2018-12-25 2019-04-23 网易(杭州)网络有限公司 人脸模型的生成方法、装置、存储介质及终端
CN110111418A (zh) * 2019-05-15 2019-08-09 北京市商汤科技开发有限公司 创建脸部模型的方法、装置及电子设备
CN110675475A (zh) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 一种人脸模型生成方法、装置、设备及存储介质

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7856125B2 (en) * 2006-01-31 2010-12-21 University Of Southern California 3D face reconstruction from 2D images
TWI427545B (zh) 2009-11-16 2014-02-21 Univ Nat Cheng Kung 以尺度不變特徵轉換和人臉角度估測為基礎的人臉辨識方法
TWI443601B (zh) 2009-12-16 2014-07-01 Ind Tech Res Inst 擬真臉部動畫系統及其方法
US8550818B2 (en) * 2010-05-21 2013-10-08 Photometria, Inc. System and method for providing and modifying a personalized face chart
JP2014199536A (ja) * 2013-03-29 2014-10-23 株式会社コナミデジタルエンタテインメント 顔モデル生成装置、顔モデル生成装置の制御方法、及びプログラム
CN104715227B (zh) * 2013-12-13 2020-04-03 北京三星通信技术研究有限公司 人脸关键点的定位方法和装置
KR102357340B1 (ko) * 2014-09-05 2022-02-03 삼성전자주식회사 얼굴 인식 방법 및 장치
KR101997500B1 (ko) * 2014-11-25 2019-07-08 삼성전자주식회사 개인화된 3d 얼굴 모델 생성 방법 및 장치
EP3335195A2 (en) * 2015-08-14 2018-06-20 Metail Limited Methods of generating personalized 3d head models or 3d body models
CN107025678A (zh) * 2016-01-29 2017-08-08 掌赢信息科技(上海)有限公司 一种3d虚拟模型的驱动方法及装置
KR101757642B1 (ko) * 2016-07-20 2017-07-13 (주)레벨소프트 3d 얼굴 모델링 장치 및 방법
CN106652025B (zh) * 2016-12-20 2019-10-01 五邑大学 一种基于视频流与人脸多属性匹配的三维人脸建模方法和打印装置
CN108960020A (zh) * 2017-05-27 2018-12-07 富士通株式会社 信息处理方法和信息处理设备
US10796468B2 (en) * 2018-02-26 2020-10-06 Didimo, Inc. Automatic rig creation process
WO2019209431A1 (en) * 2018-04-23 2019-10-31 Magic Leap, Inc. Avatar facial expression representation in multidimensional space
US10706556B2 (en) * 2018-05-09 2020-07-07 Microsoft Technology Licensing, Llc Skeleton-based supplementation for foreground image segmentation
WO2019216593A1 (en) * 2018-05-11 2019-11-14 Samsung Electronics Co., Ltd. Method and apparatus for pose processing
CN109685892A (zh) * 2018-12-31 2019-04-26 南京邮电大学盐城大数据研究院有限公司 一种快速3d人脸构建系统及构建方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080037836A1 (en) * 2006-08-09 2008-02-14 Arcsoft, Inc. Method for driving virtual facial expressions by automatically detecting facial expressions of a face image
CN107705365A (zh) * 2017-09-08 2018-02-16 郭睿 可编辑的三维人体模型创建方法、装置、电子设备及计算机程序产品
CN109671016A (zh) * 2018-12-25 2019-04-23 网易(杭州)网络有限公司 人脸模型的生成方法、装置、存储介质及终端
CN110111418A (zh) * 2019-05-15 2019-08-09 北京市商汤科技开发有限公司 创建脸部模型的方法、装置及电子设备
CN110675475A (zh) * 2019-08-19 2020-01-10 腾讯科技(深圳)有限公司 一种人脸模型生成方法、装置、设备及存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112973122A (zh) * 2021-03-02 2021-06-18 网易(杭州)网络有限公司 游戏角色上妆方法、装置及电子设备

Also Published As

Publication number Publication date
CN110111418A (zh) 2019-08-09
US11836943B2 (en) 2023-12-05
SG11202103190VA (en) 2021-04-29
KR20210047920A (ko) 2021-04-30
US20210209851A1 (en) 2021-07-08
CN110111418B (zh) 2022-02-25
TW202044202A (zh) 2020-12-01
KR102523512B1 (ko) 2023-04-18
JP7191213B2 (ja) 2022-12-16
JP2022500790A (ja) 2022-01-04

Similar Documents

Publication Publication Date Title
WO2020228389A1 (zh) 一种创建脸部模型的方法、装置、电子设备及计算机可读存储介质
US10163010B2 (en) Eye pose identification using eye features
CN111354079A (zh) 三维人脸重建网络训练及虚拟人脸形象生成方法和装置
US20170161551A1 (en) Face key point positioning method and terminal
US20220148333A1 (en) Method and system for estimating eye-related geometric parameters of a user
CN111325846B (zh) 表情基确定方法、虚拟形象驱动方法、装置及介质
CN108135469A (zh) 使用眼睛姿态测量的眼睑形状估计
US11282257B2 (en) Pose selection and animation of characters using video data and training techniques
Ming Robust regional bounding spherical descriptor for 3D face recognition and emotion analysis
CN107911643B (zh) 一种视频通信中展现场景特效的方法和装置
CN104573634A (zh) 一种三维人脸识别方法
WO2020037963A1 (zh) 脸部图像识别的方法、装置及存储介质
JP2020177615A (ja) アバター用の3d顔モデルを生成する方法及び関連デバイス
CN113570684A (zh) 图像处理方法、装置、计算机设备和存储介质
JP2020177620A (ja) アバター用の3d顔モデルを生成する方法及び関連デバイス
CN111815768B (zh) 三维人脸重建方法和装置
CN111108508A (zh) 脸部情感识别方法、智能装置和计算机可读存储介质
CN108174141B (zh) 一种视频通信的方法和一种移动装置
Kim et al. Real-time facial feature extraction scheme using cascaded networks
US11361467B2 (en) Pose selection and animation of characters using video data and training techniques
KR102160955B1 (ko) 딥 러닝 기반 3d 데이터 생성 방법 및 장치
Xu et al. A novel method for hand posture recognition based on depth information descriptor
US9786030B1 (en) Providing focal length adjustments
WO2023124869A1 (zh) 用于活体检测的方法、装置、设备及存储介质
CN111222448B (zh) 图像转换方法及相关产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20804755

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021516410

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20217008646

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20804755

Country of ref document: EP

Kind code of ref document: A1