WO2020228389A1 - Method, apparatus, electronic device and computer-readable storage medium for creating a face model - Google Patents
Method, apparatus, electronic device and computer-readable storage medium for creating a face model
- Publication number
- WO2020228389A1 (PCT/CN2020/076134)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- key point
- face
- face image
- point feature
- partial
- Prior art date
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06F18/22—Matching criteria, e.g. proximity measures
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/172—Classification, e.g. identification
- G06T2207/30201—Face
Definitions
- This application relates to the field of three-dimensional modeling technology, and in particular to a method, apparatus, electronic device and computer-readable storage medium for creating face models.
- "Face pinching" refers to creating a three-dimensional face model for a virtual character.
- The embodiments of this application provide a method, apparatus and electronic device for creating a face model.
- A method for creating a face model includes: performing key point detection on a current face image to obtain at least one key point feature of the current face image; obtaining, according to the at least one key point feature, target bone parameters matching the current face image; and creating, based on the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.
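The three claimed steps can be sketched in code. Everything below is a hypothetical illustration, not the patent's actual implementation: the function names, the dictionary-based model representation, and the inverse-distance similarity measure are all assumptions.

```python
def region_similarity(a, b):
    """Hypothetical similarity: inverse of the mean point-to-point distance
    between two equally sized key point combinations."""
    d = sum(((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
            for (xa, ya), (xb, yb) in zip(a, b)) / len(a)
    return 1.0 / (1.0 + d)


def match_bone_parameters(key_point_features, reference_db):
    """Second step: for each partial face region, pick the bone parameters of
    the most similar reference entry in the database."""
    target = {}
    for region, feature in key_point_features.items():
        best = max(reference_db[region],
                   key=lambda entry: region_similarity(feature, entry["keypoints"]))
        target[region] = best["bones"]
    return target


def create_face_model(target_bones, standard_model):
    """Third step: copy the standard model and overwrite its per-region bone
    parameters with the matched targets."""
    model = dict(standard_model)
    model.update(target_bones)
    return model
```

In this sketch the first step (key point detection) is assumed to have already produced per-region coordinate lists; a real system would obtain them from a trained landmark detector.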
- The method further includes: determining a reference model database according to a preset number of face image samples and the standard three-dimensional face model, where the reference model database includes at least one reference key point feature determined from the preset number of face image samples and a reference bone parameter corresponding to each of the at least one reference key point feature.
- Obtaining target bone parameters matching the current face image includes: obtaining, from the reference model database according to the at least one key point feature, the target bone parameters matching the current face image.
- Determining the reference model database according to the preset number of face image samples and the standard three-dimensional face model includes: obtaining a face image sample set containing the preset number of face image samples, where the sample set covers multiple image styles of at least one partial face area; for each face image sample, creating, based on the standard three-dimensional face model, a reference face model corresponding to that sample, where the reference face model includes the reference bone parameters corresponding to the sample; and determining the reference model database according to the reference face model corresponding to each face image sample.
- The reference model database includes correspondences between the reference key point features that characterize each image style of each partial face region and the reference bone parameters.
- Creating the reference face model corresponding to a face image sample according to the standard three-dimensional face model includes: normalizing the face image sample to obtain a preprocessed face image that conforms to the head posture and image size of a standard face image, where the standard face image is the two-dimensional face image corresponding to the standard three-dimensional face model; performing key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, where the reference key point set includes reference key point combinations characterizing each partial face region on the sample; and adjusting the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination to create the reference face model corresponding to the sample.
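The normalization step can be approximated by a 2D similarity transform (scale, rotation, translation) that maps detected eye centers onto the standard face's eye positions. This is a minimal pure-math sketch under that assumption; the canonical eye coordinates and the eye-center anchoring are illustrative choices, and the patent does not prescribe this specific alignment method.

```python
import math

def alignment_transform(left_eye, right_eye,
                        std_left=(0.35, 0.4), std_right=(0.65, 0.4)):
    """Build a similarity transform mapping the detected eye centers onto the
    assumed standard eye positions, so head pose and scale match the standard
    face image. Returns a function that maps any (x, y) point."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    sdx, sdy = std_right[0] - std_left[0], std_right[1] - std_left[1]
    scale = math.hypot(sdx, sdy) / math.hypot(dx, dy)
    angle = math.atan2(sdy, sdx) - math.atan2(dy, dx)
    cos_a, sin_a = math.cos(angle) * scale, math.sin(angle) * scale
    # Translation chosen so the left eye lands exactly on the standard position.
    tx = std_left[0] - (cos_a * left_eye[0] - sin_a * left_eye[1])
    ty = std_left[1] - (sin_a * left_eye[0] + cos_a * left_eye[1])

    def apply(point):
        x, y = point
        return (cos_a * x - sin_a * y + tx, sin_a * x + cos_a * y + ty)
    return apply
```

Applying the returned function to every landmark (or warping the image with the same coefficients) yields a face in the standard pose and size.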
- Performing key point detection on the current face image to obtain at least one key point feature of the current face image includes: performing key point detection on the current face image to obtain the position coordinates of a preset number of key points; and determining, according to the position coordinates of the preset number of key points, key point features that each represent at least one partial face area on the current face image.
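Grouping the detected coordinates into per-region combinations might look like the following. The 68-point layout and its index ranges are an assumption about the detector (a common dlib-style convention), not something the patent specifies.

```python
# Hypothetical mapping from a 68-point landmark layout to the partial face
# regions the text refers to; the index ranges are an assumed convention.
REGION_INDICES = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_features(landmarks):
    """Split the flat list of (x, y) coordinates into per-region key point
    coordinate combinations."""
    return {region: [landmarks[i] for i in idx]
            for region, idx in REGION_INDICES.items()}
```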
- Determining the key point feature representing at least one partial face region according to the position coordinates of the preset number of key points includes: determining, from those position coordinates, a key point coordinate combination characterizing a first partial face area on the current image as the key point feature of that area, where the first partial face area is any one of the at least one partial face area; and/or fitting, from the key point coordinate combination characterizing the first partial face area, a characteristic curve that serves as the key point feature of the first partial face area.
- Obtaining target bone parameters matching the current face image from the reference model database includes: for each partial face area in the current face image, determining the reference key point feature in the reference model database that matches the key point feature of that area as the target reference key point feature of the area; and determining the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point features of each partial face region.
- Determining the matching reference key point feature as the target reference key point feature of a partial face area includes: determining the similarity between the key point feature of the partial face area and each corresponding reference key point feature in the reference model database; and taking the reference key point feature with the highest similarity as the target reference key point feature of the area.
- Determining the similarity between the key point feature of a partial face area and a corresponding reference key point feature includes: fitting, from the key point coordinate combination of the area, a characteristic curve that characterizes the area; and determining the similarity according to the distance between that characteristic curve and the corresponding reference characteristic curve in the reference model database.
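The similarity-from-curve-distance step can be sketched by sampling both characteristic curves at common x positions and mapping the mean distance into a score. The sampling grid, the vertical-distance measure, and the 1/(1+d) mapping are all illustrative assumptions.

```python
import numpy as np

def curve_similarity(curve_a, curve_b, x_range=(0.0, 1.0), samples=32):
    """Similarity between two characteristic curves: sample both at common x
    positions, take the mean vertical distance, and map it into (0, 1], with
    1.0 meaning the curves coincide on the sampled grid."""
    xs = np.linspace(x_range[0], x_range[1], samples)
    mean_dist = float(np.mean(np.abs(curve_a(xs) - curve_b(xs))))
    return 1.0 / (1.0 + mean_dist)
```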
- When a partial face area includes at least two sub-regions, determining the similarity between the key point feature of the partial face area and the corresponding reference key point feature of a face image sample in the reference model database includes: for each sub-region, determining the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the sample, to obtain the local similarity of that sub-region; and determining, from the local similarities of all sub-regions, the overall similarity between the partial face area and the corresponding area of the sample, which serves as the similarity between the key point feature of the partial face area and the corresponding reference key point feature of the sample.
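Combining per-sub-region local similarities into one overall similarity can be as simple as a (possibly weighted) mean; the aggregation rule and any weights are assumptions for illustration, since the text does not fix a formula.

```python
def overall_similarity(local_sims, weights=None):
    """Aggregate per-sub-region local similarities into the overall similarity
    of the partial face region. Uniform weights are assumed by default."""
    if weights is None:
        weights = [1.0] * len(local_sims)
    total = sum(w * s for w, s in zip(weights, local_sims))
    return total / sum(weights)
```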
- An apparatus for creating a face model includes: a key point detection module, configured to perform key point detection on a current face image to obtain at least one key point feature of the current face image; a parameter matching module, configured to obtain target bone parameters matching the current face image according to the at least one key point feature; and a model creation module, configured to create, according to the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.
- The apparatus further includes a database creation module, configured to determine the reference model database based on a preset number of face image samples and the standard three-dimensional face model, where the reference model database includes at least one reference key point feature determined from the preset number of face image samples and the reference bone parameters corresponding to each reference key point feature.
- The parameter matching module is specifically configured to obtain, according to the at least one key point feature, target bone parameters matching the current face image from the reference model database.
- The database creation module includes: a sample acquisition sub-module, configured to acquire a face image sample set containing the preset number of face image samples, where the sample set covers multiple image styles of at least one partial face region; a reference model creation sub-module, configured to create, for each face image sample and according to the standard three-dimensional face model, the reference face model corresponding to that sample, the reference face model including the reference bone parameters corresponding to the sample; and a sub-module configured to determine the reference model database according to the reference face model corresponding to each face image sample.
- The reference model database includes correspondences between the key point features representing each image style of each partial face region and the reference bone parameters.
- The reference model creation sub-module includes: an image preprocessing unit, configured to normalize a face image sample to obtain a preprocessed face image consistent with the head posture and image size of the standard face image, where the standard face image is the two-dimensional face image corresponding to the standard three-dimensional face model; a key point detection unit, configured to perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, where the reference key point set includes reference key point combinations characterizing each partial face region on the sample; and a model creation unit, configured to adjust the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination to create the reference face model corresponding to the sample.
- The key point detection module includes: a key point positioning sub-module, configured to perform key point detection on the current face image to obtain the position coordinates of a preset number of key points; and a key point feature determination sub-module, configured to determine, according to those position coordinates, key point features representing at least one partial face area on the current face image.
- The key point feature determination sub-module includes: a coordinate combination determination unit, configured to determine, based on the position coordinates of the preset number of key points, the key point coordinate combination characterizing a first partial face area on the current image as the key point feature of that area, where the first partial face area is any one of the at least one partial face area; and/or a characteristic curve determination unit, configured to fit, from the key point coordinate combination characterizing the first partial face area, a characteristic curve that serves as the key point feature of the first partial face area.
- The parameter matching module includes: a feature matching sub-module, configured to determine, for each partial face region in the current face image, the reference key point feature in the reference model database that matches the key point feature of that region as the target reference key point feature of the region; and a skeleton parameter determination sub-module, configured to determine the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point features of each partial face region.
- The feature matching sub-module includes: a similarity determination unit, configured to determine the similarity between the key point feature of a partial face region and the corresponding reference key point features in the reference model database; and a target feature determination unit, configured to take the reference key point feature with the highest similarity as the target reference key point feature of the region.
- The similarity determination unit includes: a curve fitting subunit, configured to fit, from the key point coordinate combination of a partial face region, the characteristic curve that characterizes the region; and a similarity determination subunit, configured to determine the similarity between the key point feature of the region and the corresponding reference key point feature according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database.
- The similarity determination unit includes: a local similarity determination subunit, configured to, when a partial face area includes at least two sub-areas, determine, for each sub-area and for each face image sample in the reference model database, the similarity between the key point feature of the sub-area and the reference key point feature of the corresponding sub-area of the sample, obtaining the local similarity of the sub-area; and an overall similarity determination subunit, configured to determine, for each face image sample and from the local similarities of all sub-areas, the overall similarity between the partial face area and the corresponding area of the sample, which serves as the similarity between the key point feature of the partial face area and the corresponding reference key point feature of the sample.
- A computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of the first aspects.
- An electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method according to any one of the first aspects when executing the program.
- The computer system automatically obtains target bone parameters corresponding to the face image based on key point features that characterize the local face areas, automatically adjusts the bone parameters of the standard three-dimensional face model according to the target bone parameters, and can thus automatically create a virtual three-dimensional face model that fits the current face image. In the entire model creation process, users do not need to manually adjust complex bone parameters based on their own subjective judgment, which reduces the difficulty of user operations.
- the computer system may pre-configure the reference model database, and then quickly match the target bone parameters corresponding to the face image from the reference model database.
- The regularity of local facial features keeps the data volume of the reference model database small, so the computer system can quickly match the target bone parameters from the reference model database according to the key point features of the current face image, and then use the target bone parameters to efficiently and relatively accurately create a virtual three-dimensional face model matching the current face image.
- Fig. 1 is a flowchart of a method for creating a face model according to an exemplary embodiment of the present application.
- Fig. 2 is a schematic diagram showing an application scenario for creating a face model according to an exemplary embodiment of the present application.
- Figures 3-1 and 3-2 are schematic diagrams of application scenarios for creating a face model according to another exemplary embodiment of the present application.
- Fig. 4 is a flowchart showing a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 5 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 6 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 7 is a schematic diagram showing an application scenario for creating a face model according to another exemplary embodiment of the present application.
- Fig. 8 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 9-1, Fig. 9-2 and Fig. 9-3 are schematic diagrams of application scenarios for creating a face model according to another exemplary embodiment of the present application.
- Fig. 10 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 11 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 12 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 13 is a flowchart of a method for creating a face model according to another exemplary embodiment of the present application.
- Fig. 14 is a block diagram showing an apparatus for creating a face model according to an exemplary embodiment of the present application.
- Fig. 15 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 16 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 17 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 18 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 19 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 20 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 21 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 22 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 23 is a block diagram showing an apparatus for creating a face model according to another exemplary embodiment of the present application.
- Fig. 24 is a schematic structural diagram of an electronic device according to another exemplary embodiment of the present application.
- Although the terms first, second, third, etc. may be used in this application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from each other.
- For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information.
- The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
- The "virtual character" has evolved from a single fixed avatar to a character designed by the player, allowing the creation of a more individualized character image.
- A system implementing the method for creating a three-dimensional face model of a virtual character based on virtual bone control may include a computer system, and may further include a camera for capturing face images.
- the above-mentioned computer system may be installed in a server, a server cluster or a cloud platform, or may be an electronic device such as a personal computer or a mobile terminal.
- the above-mentioned mobile terminal may specifically be an electronic device such as a smart phone, a PDA (Personal Digital Assistant, personal digital assistant), a tablet computer, and a game console.
- The camera and the computer system are independent of each other and connected to each other, jointly implementing the method for creating a face model provided in the embodiments of the present application.
- the method may include:
- Step 110: Perform key point detection on the current face image to obtain at least one key point feature of the current face image.
- each of the key point features can represent one or more partial face regions on the current face image.
- the game application interface may provide a user operation entry.
- The game player can input a face image through the user operation entry, expecting the background program of the computer system to create a corresponding virtual three-dimensional face model based on that face image.
- the computer system can create a virtual three-dimensional face model based on the face image input by the game player through the face pinching function, so as to meet the game player's individual needs for the game character.
- the aforementioned current face image may be taken by a game player, or may be selected by the game player from a picture database.
- the aforementioned current face image may be an image taken for a person in the real world, or a virtual person portrait designed manually or using drawing software.
- The embodiments of the present application limit neither the way the current face image is acquired nor whether the person in the image actually exists in the real world.
- the computer system may first perform normalization processing on the current face image to obtain a face area image with a preset head posture and a preset image size.
- a pre-trained neural network is used to perform processing such as face detection, face posture correction, image scaling, etc., to obtain a face image with a preset image size and conforming to the preset head posture.
- the computer system can use any face key point detection method well known to those skilled in the relevant art to perform key point detection on the aforementioned preprocessed face region image to obtain key point features of the current face image.
- The key point features of the current face image may include the position coordinates of key points, and may also include characteristic curves fitted from the position coordinates of multiple key points that represent local face areas, such as eyelid contour lines and lip lines.
- Step 120: Obtain target bone parameters matching the current face image according to the key point features.
- step 120 may specifically include obtaining target bone parameters matching the current face image from a reference model database according to the key point feature.
- the reference model database includes reference key point features determined from a preset number of face image samples and reference bone parameters corresponding to each of the reference key point features.
- In view of the strong regularity of facial features, each facial part can be characterized by a limited number of image styles.
- For example, a limited number of eye shapes can express the eye characteristics of most people, and a limited number of eyebrow styles can characterize the eyebrow characteristics of most people.
- For instance, twelve eyebrow shapes can cover the eyebrow features of most faces.
- the computer system may determine the reference model database according to a certain number of face image samples in advance.
- The reference model database includes the reference key point features determined from the face image samples and the reference bone parameters corresponding to each reference key point feature; the reference bone parameters can be used to generate (render) the reference face model of a face image sample.
- after the computer system obtains the key point feature of the current face image, it can find the reference key point feature most similar to that key point feature as the target reference key point feature, and then obtain from the reference model database the reference bone parameter corresponding to the target reference key point feature as the target bone parameter adapted to the current face image.
- the reference model database may include reference bone parameters representing the reference face models used to generate the face image samples, and the correspondence between the reference key point features acquired from the face image samples and those reference bone parameters.
- the key point feature that characterizes a preset local face area can be used to match against the reference model database, and the reference bone parameter corresponding to that key point feature can be obtained from it as the target bone parameter of that local face area.
- in this way, a set of target bone parameters fitting the current face image can be obtained, such as the target bone parameters of the eyes, mouth, eyebrows, nose, facial contour, etc.
- Step 130 Create a virtual three-dimensional face model corresponding to the current face image according to the target bone parameters and the standard three-dimensional face model.
- the computer system can adjust the parameters of the bones in the standard 3D face model according to the above target bone parameters to generate a virtual 3D face model reflecting the facial features of the current face image.
- the virtual three-dimensional face model may be a virtual three-dimensional face model close to the facial features of an actual person, or it may be a cartoonized virtual three-dimensional face model reflecting the demeanor of the person.
- the embodiments of the present application do not require that the finally output three-dimensional face model be close to the facial features of a real-world person.
- FIG. 3-1 shows a schematic diagram of a standard three-dimensional face model according to an exemplary embodiment.
- the standard three-dimensional face model belongs to a cartoonized virtual three-dimensional face model.
- Fig. 3-2 shows a skeleton diagram of the above-mentioned standard three-dimensional face model.
- the entire model consists of a preset number of bone structures, such as 61 bones.
- the line between every two points in Figure 3-2 represents a bone.
- Each part involves one or more bones.
- the nose part involves 3 bones. By adjusting the parameters of the 3 bones, different types of 3D nose models can be generated.
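- as a purely illustrative sketch (the class name and parameter fields below are hypothetical, not taken from this disclosure), driving a local model through a few bone parameters, such as the three nose bones G1 to G3, could look like this:

```python
from dataclasses import dataclass

# Hypothetical bone representation; the real parameter set is not
# specified by this disclosure.
@dataclass
class Bone:
    name: str
    scale: float = 1.0      # overall size of the bone
    rotation: float = 0.0   # rotation, in degrees
    offset: float = 0.0     # translation along the bone axis

def adjust_bones(model: dict, params: dict) -> dict:
    """Apply target parameters to the named bones of a model."""
    for bone_name, values in params.items():
        bone = model[bone_name]
        bone.scale = values.get("scale", bone.scale)
        bone.rotation = values.get("rotation", bone.rotation)
        bone.offset = values.get("offset", bone.offset)
    return model

# three nose bones G1..G3 and a set of target parameters for them
nose = {f"G{i}": Bone(f"G{i}") for i in (1, 2, 3)}
target = {"G1": {"scale": 1.2}, "G2": {"rotation": 5.0}, "G3": {"offset": 0.3}}
nose = adjust_bones(nose, target)
```

- different combinations of such parameter values would yield different three-dimensional nose models from the same three bones.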
- the computer system automatically obtains the target bone parameters corresponding to the face image based on the key point features that characterize the local face area, and automatically adjusts the bone parameters of the standard three-dimensional face model according to the target bone parameters.
- users do not need to manually adjust complex bone parameters according to their own subjective judgments, which reduces the difficulty of user operations.
- the computer system may pre-configure the reference model database, and then quickly match the target bone parameters corresponding to the face image from the reference model database.
- the regularity of local face area features keeps the data volume of the reference model database small, so the computer system can quickly match the target bone parameters from the reference model database according to the key point features of the current face image, and then use these target bone parameters to efficiently and relatively accurately create a virtual three-dimensional face model matching the current face image, which generalizes well.
- the above method further includes creating a reference model database.
- the method for creating a face model may further include step 100 to determine a reference model database according to a preset number of face image samples and a standard three-dimensional face model.
- the reference model database includes at least one reference key point feature determined from a preset number of face image samples and reference bone parameters corresponding to each of the at least one reference key point feature.
- step 120 may include: obtaining target bone parameters matching the current face image from the reference model database according to the at least one key point feature.
- a preset number of face image samples can be obtained, and the image styles of the various parts in these face image samples can be manually annotated. Then, based on the annotated face image samples and the standard three-dimensional face model, corresponding virtual three-dimensional face models are generated through bone control.
- the virtual three-dimensional face model generated based on the face image sample and the standard three-dimensional face model is referred to as the reference face model.
- taking 201 face image samples as an example, the computer system will correspondingly generate 201 reference face models, and generate the aforementioned reference model database based on the relevant data of the 201 reference face models.
- the execution body of creating the reference model database and the execution body of subsequently applying the reference model database to create the face model need not be the same computer system.
- the execution body of creating the reference model database may be a cloud computer system, such as a cloud server, and the execution body of the above steps 110 to 130 may be a computer system as a terminal device.
- the determination of the reference model database and the subsequent face model creation can both be executed by the computer system of the terminal device.
- step 100 may include:
- Step 101 Obtain a face image sample set containing a preset number of face image samples.
- the face image sample set includes multiple image patterns that characterize at least one partial face area.
- the face image sample set may include a certain number of face image samples.
- the above-mentioned face image samples should contain, as comprehensively as possible, the different image styles of the various face parts such as the forehead, eyes, nose, and lips, to ensure that the correspondingly generated reference model database includes reference data that is as comprehensive as possible, such as reference key point features and reference bone parameters.
- the following condition can be met: for a randomly collected two-dimensional face image A, image styles corresponding to the different partial face regions of image A can be found among the image patterns of the face parts contained in the above face image sample set; in other words, by selectively extracting the image styles of different partial face regions, such as the facial features, from the above face image samples, a face image similar to image A can be roughly pieced together.
- a certain number of face image samples can be collected according to common facial features in the real world to obtain a face image sample set.
- 201 face image samples are collected to determine the above-mentioned face image sample set.
- the 201 face image samples may contain multiple image patterns for each partial face area.
- the partial face area refers to the eyebrow area, eye area, nose area, facial contour and other areas recognized from the two-dimensional face image.
- the 201 face image samples include the 12 eyebrow shapes corresponding to the eyebrow region shown in FIG. 2 above.
- the 201 facial image samples include multiple image patterns corresponding to partial facial regions such as mouth, eyes, nose, and facial contours.
- Step 102 Create a reference three-dimensional face model corresponding to each face image sample according to the standard three-dimensional face model.
- the virtual three-dimensional face model created for each face image sample is referred to as a reference three-dimensional face model.
- Each reference 3D face model corresponds to a set of bone control parameters.
- the above step 102 may include:
- Step 1021 Perform normalization processing on the face image sample to obtain a preprocessed face image that conforms to the head posture and image size of the standard face image.
- the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model.
- normalization processing such as face area detection, head posture correction, and image scaling can be performed on each face image sample to obtain a preprocessed face image that conforms to the head posture and image size of the standard face image.
- compared with the standard face image, the preprocessed face image can be understood as a face image of another person collected by the same camera with the same image acquisition parameters, at the same object distance and with the same head posture.
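- a minimal sketch of the key point side of such normalization, assuming a 2-D landmark array and using the inter-eye line to level the head posture (the eye indices and target size here are illustrative assumptions, not values from this disclosure):

```python
import numpy as np

def normalize_face(keypoints: np.ndarray, left_eye: int, right_eye: int,
                   target_size: int = 256) -> np.ndarray:
    """Roughly align 2-D key points so the eye line is horizontal and the
    face fits a target_size x target_size image (illustrative only)."""
    # rotate so the vector from left eye to right eye becomes horizontal
    d = keypoints[right_eye] - keypoints[left_eye]
    angle = np.arctan2(d[1], d[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    pts = keypoints @ rot.T
    # translate and scale into the target image square
    pts -= pts.min(axis=0)
    pts *= (target_size - 1) / pts.max()
    return pts

# a tilted toy face: eyes at indices 0 and 1, one extra point
pts = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 2.0]])
aligned = normalize_face(pts, 0, 1, target_size=100)
```

- after such normalization, key points from different images share one coordinate frame, which is what makes the later distance comparisons meaningful.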
- the standard face image can be understood as the projection image of the standard three-dimensional face model in the preset image coordinate system.
- the standard three-dimensional face model is a virtual model created by the computer system after key point detection on the standard face image, based on the obtained key point set, such as 240 key point position coordinates, and a preset number of bones, such as 61 bones.
- Step 1022 Perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample.
- the reference key point set includes the reference key point combinations that characterize each partial face area on the face image sample.
- any key point detection method well known to those skilled in the art can be used to extract a preset number of key points from the preprocessed face image, such as 68 key points, 106 key points, or 240 key points.
- in the process of key point detection on the face region image, preset algorithms such as the Roberts edge detection operator or the Sobel operator can be used; key point detection can also be performed through related models such as the active contour (snake) model.
- the key point location can also be performed by a neural network used for face key point detection, or through third-party applications.
- the third-party toolkit Dlib is used to perform face key point location, and 68 face key points are detected, as shown in Figure 7.
- 240-point face key point positioning technology can also be used to locate the position coordinates of 240 key points, so as to realize the positioning of detailed features such as the eyebrows, eyes, nose, lips, facial contour, and key expression parts in the current face image and/or face image sample.
- the sequence number of each reference key point can be determined according to preset rules, and the reference key point combinations that characterize each partial face area can be determined. For example, in the example shown in Fig. 7, 68 key points are extracted from the face image sample; the reference key point combination composed of reference key points 18 to 22 represents the left eyebrow area. By analogy, different key point combinations are used to characterize different partial face regions.
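- the grouping of serial numbers into regions can be sketched as a simple lookup table; the boundaries below follow the common 68-point layout used by Dlib and are an assumption for illustration, not the disclosure's exact grouping:

```python
# Illustrative mapping from 68-point serial numbers (1-based) to
# partial face regions; the left eyebrow matches key points 18-22
# as in the example above.
REGIONS = {
    "jaw":           range(1, 18),
    "left_eyebrow":  range(18, 23),
    "right_eyebrow": range(23, 28),
    "nose":          range(28, 37),
    "left_eye":      range(37, 43),
    "right_eye":     range(43, 49),
    "mouth":         range(49, 69),
}

def region_keypoints(landmarks: dict, region: str) -> list:
    """Select the (serial number, coordinate) pairs for one region."""
    return [(i, landmarks[i]) for i in REGIONS[region]]

# landmarks as {serial_number: (x, y)} — dummy coordinates for the demo
landmarks = {i: (float(i), float(i) * 2) for i in range(1, 69)}
left_brow = region_keypoints(landmarks, "left_eyebrow")
```

- with such a table, the same serial-number combination selects the same region in every normalized image, which is the property the comparison steps below rely on.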
- the information of each key point includes the serial number and coordinate position.
- the serial numbers and the number of key points representing the same partial face area are the same, but the coordinate positions of the key points differ.
- the combination of key points 18-22 extracted from the standard face image also represents the left eyebrow area in the standard face image, but the coordinate position of each of those key points differs from the coordinate positions of key points 18-22 in the example shown in Figure 7.
- the coordinate position of a key point refers to its position in the preset image coordinate system, such as the XOY coordinate system shown in FIG. 7. Since the pre-processed face images all have the same size, the same image coordinate system can be used for each pre-processed face image to represent the position coordinates of key points in different pre-processed face images, which facilitates subsequent distance calculations.
- Step 1023 Adjust the corresponding bone parameters in the standard three-dimensional face model based on each of the reference key point combinations, and create a reference face model corresponding to the face image sample.
- the reference face model corresponding to the face image sample includes the reference bone parameters corresponding to the face image sample.
- the reference bone parameter may represent the reference face model used to render the face image sample.
- the system presets a mapping relationship between key point combinations and bones, and the mapping relationship may indicate which bone parameters need to be adjusted when generating, in the corresponding three-dimensional face model, the local face region represented by the key point combination.
- for example, if the nose region in the standard three-dimensional face model involves three bones, which can be denoted G1 to G3, then when the three-dimensional nose model generated by adjusting the parameters of the three bones approaches the nose shape in the face image sample, the creation of the three-dimensional nose model is determined to be complete.
- at that point, the bone control parameters of the three bones are the reference bone parameters corresponding to the image style of the nose in the face image sample.
- the creation of the reference face model is completed when the generated virtual three-dimensional face model meets the expectations of the user by adjusting the bone parameters of each partial face area.
- in this way, the reference bone parameters corresponding to each reference key point combination in the current face image sample can be determined, that is, the reference bone parameters corresponding to the image style of each partial face region in the face image sample, thereby obtaining the reference face model of the current face image sample.
- the reference face model data may include the correspondence between the reference key point combination of each partial face region and the reference bone parameters.
- after the virtual three-dimensional face model, that is, the reference face model, is created for a face image sample, the reference face model data corresponding to that face image sample can be obtained according to the correspondence between the reference key point combinations that characterize each local face region and the reference bone parameters.
- the above steps 1021 to 1023 describe the process of creating a corresponding reference face model based on a face image sample.
- Step 103 Determine a reference model database according to the reference face model corresponding to each face image sample.
- the reference model database includes the correspondence between the reference key point features and the reference bone parameters that characterize each image style of each partial face region.
- the reference face model corresponding to each face image sample can be created according to the method shown in FIG. 6, and then the reference face model data corresponding to each face image sample can be determined.
- the reference model database may include the correspondence between the reference key point combinations representing the image styles of each partial face region and the reference bone parameters, the reference key point feature data of each face image sample, and the reference bone parameters of each reference face model.
- the bones have a parent-child relationship: when a parent bone moves, it drives its child bones to move, similar to how movement of the wrist drives movement of the bones of the palm.
- the bone parameter adjustment of a partial face region may be related to the adjustment parameters of other bones in the entire face model. Therefore, in the embodiment of the present application, in the reference model database, a set of reference bone parameters corresponding to the entire reference face model is used for data storage.
- the computer system performs key point detection on the currently input face image and, after obtaining the key point features, automatically retrieves the reference model database according to the key point features of the current face image, matching from it the target bone parameters of the different partial face areas.
- the foregoing step 110 may include:
- Step 1101 Perform key point detection on the current face image to obtain position coordinates of a preset number of key points.
- the computer system can perform normalization processing on the current face image, including face area detection, head posture correction, and image scaling, to obtain a preprocessed image with the same size as the standard face image.
- the face key point positioning technology can be used to detect key points on the preprocessed image.
- 240 face key point positioning technology can be used to perform key point detection on the preprocessed image to obtain the position coordinates of 240 face key points.
- Step 1102 Determine a key point feature representing at least one partial face region on the current face image according to the position coordinates of the preset number of key points.
- the key point features representing at least one partial face area in the current face image can be determined.
- taking a partial face area such as the eyebrow area as an example, its key point features can be represented in at least the following two ways:
- the first way is to use the position coordinate combination of the key points to express the key point features of the local face area.
- the key point coordinates that characterize a local face area can be combined as the key point feature of the local face area.
- the coordinate position combination of the key points with serial numbers 18-22 is determined as the key point feature of the left eyebrow area.
- for different faces, the key points representing the same area are relatively fixed, including the number of key points and the serial number of each key point, but the coordinate positions of the key points with the same serial numbers in the image coordinate system differ.
- for example, in one face image the coordinate position of key point 18 is (80, 15), that is, the pixel in the 80th row and 15th column, while in another face image the coordinate position of key point 18 may be (100, 20), that is, the pixel in the 100th row and 20th column. Therefore, the position coordinates of a key point combination can effectively distinguish the facial features of different people.
- the second method is to use the fitting curve of the key point coordinate combination to express the key point features of the local face area.
- a feature curve that characterizes the local face area can be fitted according to a combination of key point coordinates that characterize a local face area, and the feature curve can be used as the key point feature of the local face area.
- the characteristic curve fitted according to the coordinate positions of the key points with serial numbers 18-22 is used as the key point feature of the left eyebrow area.
- the eyelid characteristic curve is fitted according to the position coordinates of the key points 1-12 of the eye as the key point characteristic of the left eye.
- for different faces, the shape of the curve fitted from the key point position coordinates also differs. Therefore, the aforementioned characteristic curve can be used as a key point feature that characterizes a partial face region in the current face image, to distinguish the faces of different people.
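- one way to realize such a fitted feature curve, sketched with a simple polynomial fit (the polynomial degree and the left-to-right ordering rule are assumptions for illustration; the disclosure does not fix a fitting method):

```python
import numpy as np

def fit_region_curve(points: np.ndarray, degree: int = 2) -> np.ndarray:
    """Fit a polynomial y = f(x) through a region's key points,
    taken left to right, and return its coefficients as the feature."""
    order = np.argsort(points[:, 0])        # enforce a left-to-right rule
    xs, ys = points[order, 0], points[order, 1]
    return np.polyfit(xs, ys, degree)

# five eyebrow-like key points lying exactly on y = 0.1 * x**2
brow = np.array([(x, 0.1 * x * x) for x in (0.0, 1.0, 2.0, 3.0, 4.0)])
coeffs = fit_region_curve(brow)
```

- the coefficient vector (or the curve sampled at fixed x positions) can then stand in for the raw key point coordinates when comparing two regions.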
- the target bone parameters matching the current face image can be searched from the reference model database based on the similarity between the key point features.
- the foregoing step 120 may include:
- Step 121 For each partial face area in the current face image, determine a reference key point feature matching the key point feature of the partial face area in the reference model database as the target reference key point feature of the partial face area.
- the above step 121 may include:
- Step 1211 Determine the similarity between the key point feature of the local face area and the corresponding reference key point feature in the reference model database.
- the corresponding reference key point feature in the reference model database may be a reference key point feature corresponding to the position of the partial face region in the reference model database.
- the Euclidean distance between key point coordinate combinations can be used to determine the similarity between the key point feature of the local face area and the reference key point feature.
- the Euclidean distances between the position coordinates of key points 18-22 in the current face image and the position coordinates of key points 18-22 in any face image sample can be calculated respectively, denoted l18, l19, l20, l21, l22, where l18 represents the Euclidean distance between the position coordinates of key point 18 in the current face image and the position coordinates of key point 18 in the face image sample, and so on.
- the similarity of the left eyebrow region in the two images can be represented by the sum L of the Euclidean distances of key points 18-22.
- L can be expressed as:
- L = l18 + l19 + l20 + l21 + l22.
- the aforementioned similarity can also be expressed as a weighted value of the Euclidean distance between key points.
- preset weights can be set for each key point according to the actual application scenario. For example, if the weights set for key points 18-22 are w1, w2, w3, w4, w5, then L can be expressed as:
- L = w1*l18 + w2*l19 + w3*l20 + w4*l21 + w5*l22.
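- the plain and weighted sums above can be sketched in a few lines (the weight values in the demo are arbitrary placeholders):

```python
import numpy as np

def region_distance(pts_a: np.ndarray, pts_b: np.ndarray,
                    weights=None) -> float:
    """Sum (optionally weighted) of per-key-point Euclidean distances;
    a smaller L means the two regions are more similar."""
    d = np.linalg.norm(pts_a - pts_b, axis=1)   # l18, l19, ..., l22
    if weights is None:
        return float(d.sum())                   # L = l18 + ... + l22
    return float(np.dot(weights, d))            # L = w1*l18 + ... + w5*l22

# key points 18-22 of the left eyebrow in two images, offset by (3, 4),
# so every per-point distance is exactly 5
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
b = a + np.array([3.0, 4.0])
```

- calling `region_distance(a, b)` yields the unweighted sum, and passing a five-element weight vector yields the weighted form.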
- the foregoing step 1211 may include:
- Step 12111 According to the combination of key point coordinates of the local face area, a characteristic curve characterizing the local face area is fitted.
- a characteristic curve can be fitted according to a preset rule, such as from left to right, as shown in Figure 9-2.
- Step 12112 Determine the similarity between the key point feature of the partial face area and the corresponding reference key point feature in the reference model database according to the distance between the feature curve and the corresponding reference feature curve in the reference model database.
- the Fréchet distance value can be used to measure the similarity between key point features.
- a combination of the two can also be used to determine the target reference key point feature.
- the Euclidean distance between the key point coordinate combination and each corresponding reference key point coordinate combination in the reference model database can be calculated respectively. If at least two reference key point coordinate combinations in the reference model database have the same Euclidean distance value to the key point coordinate combination, the Fréchet distance between each of those reference key point coordinate combinations and the key point coordinate combination is further calculated, thereby effectively identifying the target reference key point feature whose characteristic curve shape is closest to that of the key point coordinate combination in the current face image.
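- a common discrete approximation of the Fréchet distance between two key point polylines can be sketched with the standard dynamic-programming formulation (this particular formulation is an assumption; the disclosure does not fix a computation method):

```python
import numpy as np

def discrete_frechet(p: np.ndarray, q: np.ndarray) -> float:
    """Discrete Frechet distance between two 2-D polylines."""
    n, m = len(p), len(q)
    ca = np.full((n, m), -1.0)          # memo table, -1 = not computed

    def c(i: int, j: int) -> float:
        if ca[i, j] >= 0:
            return ca[i, j]
        d = np.linalg.norm(p[i] - q[j])
        if i == 0 and j == 0:
            ca[i, j] = d
        elif i == 0:
            ca[i, j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i, j] = max(c(i - 1, 0), d)
        else:
            ca[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i, j]

    return float(c(n - 1, m - 1))

# two eyebrow-like curves: identical, and shifted up by 1
p = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
q = p + np.array([0.0, 1.0])
```

- a smaller Fréchet value indicates more similar curve shapes, which is what makes it a useful tie-breaker when Euclidean sums coincide.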
- a corresponding strategy may be used to determine the similarity between the key point feature of the local face area and the corresponding reference key point feature in the reference model database according to the distribution characteristics of the local face area.
- when the partial face area includes at least two sub-regions, the partial face area can be characterized by the key point features of the at least two sub-regions. Referring to FIG. 13, when determining the key point features of the partial face area,
- the above step 1211 may include:
- Step 1211-1 For each sub-region in the partial face region, determine the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample in the reference model database, to obtain the local similarity corresponding to the sub-region.
- the corresponding sub-region of the face image sample refers to the sub-region in the face image sample corresponding to the position of the sub-region currently being processed in the partial face region.
- the key point features of the eye area include key point features corresponding to the left eye area and the right eye area respectively.
- the key point features of the eyebrow area include the key point features corresponding to the left eyebrow area and the right eyebrow area respectively.
- the key point features of the mouth area include the key point features corresponding to the upper lip area and the lower lip area respectively.
- the similarity between the key point feature of the left eye area and each left-eye reference key point feature in the reference model database can be determined according to any of the above determination methods, as can the similarity between the key point feature of the right eye area and each right-eye reference key point feature in the reference model database.
- the similarity between the key point feature of a sub-region and the reference key point feature of the corresponding sub-region in the reference model database is called the local similarity.
- Step 1211-2 Determine the overall similarity between the partial face area and the corresponding partial face area of the face image sample according to the local similarity corresponding to each sub-region, as the similarity between the key point feature of the partial face area and the corresponding reference key point feature of the face image sample in the reference model database.
- for example, the two local similarities can be summed, or weighted and summed, as the similarity between the current face image and the corresponding area of a face image sample.
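- combining sub-region results into one overall score is a one-liner in either form (the sub-region names and weight values here are illustrative):

```python
def region_similarity(local_distances: dict, weights: dict = None) -> float:
    """Combine sub-region distances (e.g. left and right eye) into one
    overall distance for the partial face area; lower is more similar."""
    if weights is None:
        return sum(local_distances.values())                 # plain sum
    return sum(weights[k] * v for k, v in local_distances.items())

# local similarities for the two sub-regions of the eye area
eyes = {"left_eye": 3.0, "right_eye": 5.0}
overall = region_similarity(eyes)
weighted = region_similarity(eyes, {"left_eye": 0.5, "right_eye": 0.5})
```

- the weighted form allows, for instance, a more expressive sub-region to count more toward the overall comparison.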
- in this way, the overall similarity of a partial face area such as the eyes, eyebrows, or mouth can be compared more accurately based on the local similarities of its multiple sub-regions, and the target bone parameters of the partial face region can thereby be determined more accurately from the reference model database.
- Step 1212 Determine the reference key point feature with the highest similarity as the target reference key point feature of the partial face area.
- in step 1211, the similarity between the key point feature of the partial face area in the current face image and the corresponding reference key point feature of each face image sample in the reference model database is calculated, and the reference key point feature with the highest similarity is determined as the target reference key point feature of the partial face region in the current face image.
- Step 122 Determine the target bone parameter of the current face image according to the reference bone parameter corresponding to the target reference key point feature of each partial face region in the current face image.
- the reference model database stores the reference key point features corresponding to each image style of each partial face area, and the corresponding relationship between the reference key point features and the reference bone parameters.
- each face image can be divided into M partial face regions, for example, 5 partial face regions: the eyebrows, eyes, nose, mouth, and facial contour.
- Each reference key point feature corresponds to a set of reference bone parameters. Therefore, at least N groups of reference bone parameters corresponding to the N reference key point features are stored in the reference model database.
- if the aforementioned reference key point features are reference feature curves fitted from the coordinate positions of the reference key points, at least N groups of reference bone parameters corresponding to the N reference feature curves are stored in the reference model database.
- if the aforementioned reference key point features are coordinate position combinations of the reference key points, that is, reference key point coordinate combinations, at least N sets of reference bone parameters corresponding to the N reference key point coordinate combinations are stored in the reference model database.
- after the computer system obtains the key point feature of a partial face area from the current face image, it can determine the reference key point feature in the reference model database that matches the key point feature, such as the most similar reference key point feature, as the target reference key point feature, and then determine the target bone parameter suitable for the key point feature according to the reference bone parameter corresponding to the target reference key point feature.
- the reference bone parameter corresponding to the target reference key point feature is directly determined as the target bone parameter applicable to the local face region corresponding to the key point feature in the current face image.
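- the lookup itself reduces to a nearest-neighbor search over the reference entries; a minimal sketch, assuming features are fixed-length vectors and the database is a list of (feature, bone parameters) pairs:

```python
import numpy as np

def match_target_bones(query_feature: np.ndarray, reference_db: list) -> dict:
    """Return the bone parameters of the reference entry whose key point
    feature is closest (smallest Euclidean distance) to the query."""
    best, best_dist = None, float("inf")
    for ref_feature, bone_params in reference_db:
        dist = float(np.linalg.norm(query_feature - ref_feature))
        if dist < best_dist:
            best, best_dist = bone_params, dist
    return best

# toy database: two reference features with hypothetical bone parameters
db = [
    (np.array([0.0, 0.0, 1.0]), {"G1": 0.2}),
    (np.array([1.0, 1.0, 1.0]), {"G1": 0.9}),
]
matched = match_target_bones(np.array([0.9, 1.1, 1.0]), db)
```

- running this once per partial face region yields the full set of target bone parameters for the face model to be created.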
- by combining the expression modes of the key point features, the Euclidean distance or the Fréchet distance is used to measure the similarity between the key point feature of a partial face region in the current face image and the corresponding reference key point feature in the reference model database. This makes it possible to determine, relatively accurately and quickly, the target bone parameters of the partial face area in the current face image based on the reference model database, and thus the bone adjustment parameters of the face model to be created, which can effectively improve the model creation experience in preset application scenarios such as game scenes.
- the present disclosure also provides embodiments of application function realization devices and corresponding terminals.
- an embodiment of the present application provides an apparatus for creating a face model.
- the apparatus may include: a key point detection module 21, configured to perform key point detection on the current face image to obtain at least one key point feature of the current face image; a parameter matching module 22, configured to obtain target bone parameters matching the current face image according to the at least one key point feature; and a model creation module 23, configured to create a virtual three-dimensional face model corresponding to the current face image according to the target bone parameters and the standard three-dimensional face model.
- the virtual three-dimensional face model corresponding to the current face image output by the model creation module 23 may be a cartoonized virtual three-dimensional face model corresponding to the current face image.
- the virtual three-dimensional face model corresponding to the current face image output by the model creation module 23 may also be a virtual three-dimensional face model similar to the actual face in the current face image.
- for example, the face model is a virtual three-dimensional face model that closely resembles the real face.
- the device may further include: a database creation module 20, configured to determine the reference model database based on a preset number of face image samples and the standard three-dimensional face model.
- the reference model database includes at least one reference key point feature determined from a preset number of face image samples and reference bone parameters corresponding to each of the at least one reference key point feature.
- the parameter matching module 22 is specifically configured to obtain target bone parameters matching the current face image from the reference model database according to the at least one key point feature.
- the database creation module 20 may include: a sample acquisition sub-module 201, configured to acquire a face image sample set containing the preset number of face image samples, where the face image sample set includes multiple image styles representing at least one partial face region; a reference model creation sub-module 202, configured to create, for each face image sample, a reference face model corresponding to the face image sample according to the standard three-dimensional face model, where the reference face model includes the reference bone parameters corresponding to the face image sample; and a database determination sub-module 203, configured to determine the reference model database according to the reference face model corresponding to each of the face image samples.
- the reference model database includes a correspondence relationship between the key point feature representing each of the image styles of each of the partial face regions and the reference bone parameters.
- the reference model creation sub-module 202 may include: an image preprocessing unit 2021, configured to perform normalization processing on a face image sample to obtain a preprocessed face image that conforms to the head posture and image size of a standard face image, where the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model; a key point detection unit 2022, configured to perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, where the reference key point set includes reference key point combinations characterizing each of the partial face regions on the face image sample; and a reference model creation unit 2023, configured to adjust the corresponding bone parameters in the standard three-dimensional face model based on each of the reference key point combinations, to create the reference face model corresponding to the face image sample.
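The normalization performed by the image preprocessing unit 2021 could, for instance, align a sample to the standard face image with a similarity transform computed from two eye landmarks. The sketch below is an assumption-laden illustration: the canonical eye positions and the landmark dictionary layout are invented for the example and are not values from the disclosure.

```python
import numpy as np

def align_to_standard(landmarks, std_left_eye=(30.0, 40.0),
                      std_right_eye=(70.0, 40.0)):
    """Compute a 2x3 similarity transform mapping the detected eye centers
    onto canonical positions in a standard-size image. `landmarks` is a
    dict with 'left_eye' / 'right_eye' (x, y) tuples; the canonical
    coordinates are illustrative assumptions."""
    src_l = np.asarray(landmarks["left_eye"], float)
    src_r = np.asarray(landmarks["right_eye"], float)
    dst_l, dst_r = np.asarray(std_left_eye, float), np.asarray(std_right_eye, float)

    # scale and rotation carrying the source eye axis onto the target axis
    src_vec, dst_vec = src_r - src_l, dst_r - dst_l
    scale = np.linalg.norm(dst_vec) / np.linalg.norm(src_vec)
    angle = np.arctan2(dst_vec[1], dst_vec[0]) - np.arctan2(src_vec[1], src_vec[0])
    c, s = np.cos(angle) * scale, np.sin(angle) * scale
    rot = np.array([[c, -s], [s, c]])

    # translation so that the left eye lands exactly on its target
    t = dst_l - rot @ src_l
    return np.hstack([rot, t[:, None]])  # 2x3 matrix, e.g. for cv2.warpAffine
```

Applying the returned matrix to the whole image (with any affine warp routine) yields a face whose pose and scale match the standard face image before key point detection.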
- the key point detection module 21 may include: a key point positioning sub-module 211, configured to perform key point detection on the current face image to obtain the position coordinates of a preset number of key points; and a key point feature determination sub-module 212, configured to determine, according to the position coordinates of the preset number of key points, the key point features characterizing at least one partial face region on the current face image.
- the key point feature may include a combination of key point coordinates and/or a characteristic curve.
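As one possible illustration of forming per-region coordinate combinations from the detected key points, the sketch below groups a 68-point landmark set into partial face regions. The 68-point layout is an assumed convention for the example; the disclosure only requires "a preset number of key points".

```python
import numpy as np

# Index ranges into an assumed 68-point landmark layout (illustrative only).
REGION_SLICES = {
    "face_contour": slice(0, 17),
    "eyebrows": slice(17, 27),
    "nose": slice(27, 36),
    "eyes": slice(36, 48),
    "mouth": slice(48, 68),
}

def split_into_regions(landmarks):
    """Group detected key point coordinates into per-region coordinate
    combinations, one entry per partial face region."""
    landmarks = np.asarray(landmarks, dtype=float)  # expected shape (68, 2)
    return {name: landmarks[idx] for name, idx in REGION_SLICES.items()}
```

Each resulting coordinate combination can then serve directly as a key point feature, or be passed to a curve-fitting step to produce a characteristic curve.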
- the key point feature determining sub-module 212 may include: a coordinate combination determining unit 2121, configured to determine, based on the position coordinates of the preset number of key points, a combination of key point coordinates characterizing a first partial face region on the current image as the key point feature characterizing the first partial face region, where the first partial face region is any one of the at least one partial face region; and/or a characteristic curve determining unit 2122, configured to fit, according to the combination of key point coordinates characterizing the first partial face region, a characteristic curve characterizing the first partial face region as the key point feature of the first partial face region.
- the at least one partial face region includes at least one of the following: eyebrows, eyes, nose, mouth, and facial contours.
- the device embodiment shown in FIG. 19 corresponds to the case where the key point feature determination submodule 212 includes a coordinate combination determination unit 2121 and a characteristic curve determination unit 2122.
- the key point feature determining sub-module 212 may include a coordinate combination determining unit 2121 or a characteristic curve determining unit 2122.
- the parameter matching module 22 may include: a feature matching sub-module 221, configured to determine, for each partial face region in the current face image, the reference key point feature in the reference model database that matches the key point feature of the partial face region, as the target reference key point feature of the partial face region; and a bone parameter determination sub-module 222, configured to determine the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point feature of each of the partial face regions in the current face image.
- the feature matching sub-module 221 may include: a similarity determination unit 2211, configured to determine the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database; and a target feature determining unit 2212, configured to determine the reference key point feature with the highest similarity as the target reference key point feature of the partial face region.
- the key point feature may be a characteristic curve fitted according to the position coordinates of the key point.
- the similarity determination unit 2211 may include: a curve fitting subunit 2201, configured to fit, according to the combination of key point coordinates of a partial face region, a characteristic curve characterizing the partial face region; and a similarity determination subunit 2202, configured to determine, according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database, the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database.
- the distance may include the Euclidean distance or the Fréchet distance.
- the target feature determining unit 2212 may be used to determine the reference characteristic curve with the smallest distance value as the target reference key point feature.
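Selecting the reference entry whose characteristic curve has the smallest distance, as described for the similarity and target feature units above, might look like the following sketch. The database layout (a list of dicts with `curve` and `bone_params` keys) and the use of a mean point-wise Euclidean distance between equally sampled curves are assumptions made for illustration.

```python
import numpy as np

def match_reference(curve, reference_db):
    """Return the bone parameters of the reference entry whose
    characteristic curve is closest to `curve`. `reference_db` is assumed
    to be a list of dicts with 'curve' (same sampling as `curve`) and
    'bone_params'."""
    best_entry, best_dist = None, float("inf")
    for entry in reference_db:
        # mean point-wise Euclidean distance between equally sampled curves
        dist = np.linalg.norm(np.asarray(curve, float)
                              - np.asarray(entry["curve"], float),
                              axis=1).mean()
        if dist < best_dist:
            best_entry, best_dist = entry, dist
    return best_entry["bone_params"], best_dist
```

The smallest curve distance corresponds to the highest similarity, so returning the minimizing entry realizes the "reference characteristic curve with the smallest distance value" selection.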
- the similarity determination unit 2211 may include: a local similarity determination subunit 22111, configured to, in the case where a partial face region includes at least two sub-regions, determine, for each of the sub-regions in the partial face region and for each face image sample in the reference model database, the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample, to obtain the local similarity corresponding to the sub-region; and an overall similarity determination subunit 22112, configured to determine, for each face image sample in the reference model database, the overall similarity between the partial face region and the corresponding partial face region in the face image sample according to the local similarity corresponding to each of the sub-regions, as the similarity between the key point feature of the partial face region and the corresponding reference key point feature of the face image sample in the reference model database.
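Aggregating the per-sub-region local similarities into an overall similarity, as the overall similarity determination subunit does, could be as simple as a weighted average; the weighting scheme below is an illustrative assumption, since the disclosure does not fix a particular aggregation.

```python
def overall_similarity(local_sims, weights=None):
    """Aggregate per-sub-region similarities (e.g. for the two eyes of
    the 'eyes' region) into one overall similarity for the partial face
    region, using a weighted average."""
    if weights is None:
        weights = [1.0] * len(local_sims)  # equal weighting by default
    total = sum(w * s for w, s in zip(weights, local_sims))
    return total / sum(weights)
```

The reference entry maximizing this aggregated value would then be taken as the target reference key point feature for the whole region.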
- for relevant details, reference may be made to the description of the method embodiments.
- the device embodiments described above are merely illustrative.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units.
- Some or all of the modules can be selected according to actual needs to achieve the objectives of the solution of this application. Those of ordinary skill in the art can understand and implement it without creative work.
- the embodiment of the present application also provides an electronic device, whose schematic structural diagram is shown according to an exemplary embodiment of the present application.
- the electronic device includes a processor 241, an internal bus 242, a network interface 243, a memory 245, and a non-volatile memory 246.
- the processor 241 reads the corresponding computer program from the non-volatile memory 246 into the memory 245 and then runs it, forming, on a logical level, the apparatus for creating a face model.
- this application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the execution body of the following processing flow is not limited to the logic units described, and may also be hardware or a logic device.
- one or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
- the embodiment of this specification also provides a computer-readable storage medium that can store a computer program. When the program is executed by a processor, it implements the steps of the method for creating a face model provided by any one of the embodiments in FIGS. 1 to 13 of this specification.
- the embodiments of the subject matter and functional operations described in this specification can be implemented in: digital electronic circuits, tangible computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them.
- the embodiments of the subject matter described in this specification can be implemented as one or more computer programs, that is, one or more modules of computer program instructions encoded on a tangible non-transitory program carrier, to be executed by a data processing device or to control the operation of the data processing device.
- the program instructions may be encoded on an artificially generated propagated signal, such as a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information and transmit it to a suitable receiver device for execution by a data processing device.
- the computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
- the processing and logic flow described in this specification can be executed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating according to input data and generating output.
- the processing and logic flow can also be executed by a dedicated logic circuit, such as FPGA (Field Programmable Gate Array) or ASIC (Application Specific Integrated Circuit), and the device can also be implemented as a dedicated logic circuit.
- Computers suitable for executing computer programs include, for example, general-purpose and/or special-purpose microprocessors, or any other type of central processing unit.
- the central processing unit will receive instructions and data from a read-only memory and/or random access memory.
- the basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data.
- a computer will also include, or be operatively coupled to, one or more mass storage devices for storing data, such as magnetic disks, magneto-optical disks, or optical disks, to receive data from them, transmit data to them, or both.
- however, a computer need not have such devices.
- the computer can be embedded in another device, such as a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device such as a universal serial bus (USB) flash drive, to name a few.
- computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, for example, semiconductor memory devices (such as EPROM, EEPROM, and flash memory devices), magnetic disks (such as internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks.
- the processor and the memory can be supplemented by or incorporated into a dedicated logic circuit.
Claims (22)
- A method for creating a face model, comprising: performing key point detection on a current face image to obtain at least one key point feature of the current face image; obtaining, according to the at least one key point feature, target bone parameters matching the current face image; and creating, according to the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.
- The method according to claim 1, further comprising: determining a reference model database according to a preset number of face image samples and the standard three-dimensional face model, wherein the reference model database includes at least one reference key point feature determined from the preset number of face image samples and reference bone parameters corresponding to each of the at least one reference key point feature; wherein obtaining, according to the at least one key point feature, the target bone parameters matching the current face image comprises: obtaining, according to the at least one key point feature, the target bone parameters matching the current face image from the reference model database.
- The method according to claim 2, wherein determining the reference model database according to the preset number of face image samples and the standard three-dimensional face model comprises: acquiring a face image sample set containing the preset number of face image samples, the face image sample set including multiple image styles representing at least one partial face region; for each face image sample, creating a reference face model corresponding to the face image sample according to the standard three-dimensional face model, the reference face model including the reference bone parameters corresponding to the face image sample; and determining the reference model database according to the reference face model corresponding to each face image sample, wherein the reference model database includes a correspondence between the reference key point feature representing each image style of each partial face region and the reference bone parameters.
- The method according to claim 3, wherein creating the reference face model corresponding to the face image sample according to the standard three-dimensional face model comprises: performing normalization processing on the face image sample to obtain a preprocessed face image conforming to the head posture and image size of a standard face image, wherein the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model; performing key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, the reference key point set including reference key point combinations representing each partial face region on the face image sample; and adjusting the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination to create the reference face model corresponding to the face image sample.
- The method according to any one of claims 1 to 4, wherein performing key point detection on the current face image to obtain the at least one key point feature of the current face image comprises: performing key point detection on the current face image to obtain position coordinates of a preset number of key points; and determining, according to the position coordinates of the preset number of key points, key point features representing at least one partial face region on the current face image.
- The method according to claim 5, wherein determining, according to the position coordinates of the preset number of key points, the key point features representing at least one partial face region on the current face image comprises: determining, based on the position coordinates of the preset number of key points, a combination of key point coordinates representing a first partial face region on the current image as the key point feature representing the first partial face region, wherein the first partial face region is any one of the at least one partial face region; and/or fitting, according to the combination of key point coordinates representing the first partial face region, a characteristic curve representing the first partial face region as the key point feature representing the first partial face region.
- The method according to any one of claims 2 to 6, wherein obtaining, according to the at least one key point feature, the target bone parameters matching the current face image from the reference model database comprises: for each partial face region in the current face image, determining a reference key point feature in the reference model database that matches the key point feature of the partial face region as a target reference key point feature of the partial face region; and determining the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point feature of each partial face region in the current face image.
- The method according to claim 7, wherein determining the reference key point feature in the reference model database that matches the key point feature of the partial face region as the target reference key point feature of the partial face region comprises: determining the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database; and determining the reference key point feature with the highest similarity as the target reference key point feature of the partial face region.
- The method according to claim 8, wherein determining the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database comprises: fitting, according to the combination of key point coordinates of the partial face region, a characteristic curve representing the partial face region; and determining, according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database, the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database.
- The method according to claim 8 or 9, wherein, in a case where the partial face region includes at least two sub-regions, determining the similarity between the key point feature of the partial face region and the corresponding reference key point feature of a face image sample in the reference model database comprises: for each sub-region in the partial face region, determining the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample in the reference model database, to obtain a local similarity corresponding to the sub-region; and determining, according to the local similarity corresponding to each sub-region, an overall similarity between the partial face region and the corresponding partial face region in the face image sample, as the similarity between the key point feature of the partial face region and the corresponding reference key point feature of the face image sample in the reference model database.
- An apparatus for creating a face model, comprising: a key point detection module configured to perform key point detection on a current face image to obtain at least one key point feature of the current face image; a parameter matching module configured to obtain, according to the at least one key point feature, target bone parameters matching the current face image; and a model creation module configured to create, according to the target bone parameters and a standard three-dimensional face model, a virtual three-dimensional face model corresponding to the current face image.
- The apparatus according to claim 11, further comprising a database creation module configured to determine the reference model database according to a preset number of face image samples and the standard three-dimensional face model, wherein the reference model database includes at least one reference key point feature determined from the preset number of face image samples and reference bone parameters corresponding to each of the at least one reference key point feature; and the parameter matching module is specifically configured to obtain, according to the at least one key point feature, the target bone parameters matching the current face image from the reference model database.
- The apparatus according to claim 12, wherein the database creation module comprises: a sample acquisition sub-module configured to acquire a face image sample set containing the preset number of face image samples, the face image sample set including multiple image styles representing at least one partial face region; a reference model creation sub-module configured to create, for each face image sample, a reference face model corresponding to the face image sample according to the standard three-dimensional face model, the reference face model including the reference bone parameters corresponding to the face image sample; and a database determination sub-module configured to determine the reference model database according to the reference face model corresponding to each face image sample, wherein the reference model database includes a correspondence between the key point feature representing each image style of each partial face region and the reference bone parameters.
- The apparatus according to claim 13, wherein the reference model creation sub-module comprises: an image preprocessing unit configured to perform normalization processing on a face image sample to obtain a preprocessed face image conforming to the head posture and image size of a standard face image, wherein the standard face image is a two-dimensional face image corresponding to the standard three-dimensional face model; a key point detection unit configured to perform key point detection on the preprocessed face image to obtain a reference key point set of the face image sample, the reference key point set including reference key point combinations representing each partial face region on the face image sample; and a reference model creation unit configured to adjust the corresponding bone parameters in the standard three-dimensional face model based on each reference key point combination to create the reference face model corresponding to the face image sample.
- The apparatus according to any one of claims 11 to 14, wherein the key point detection module comprises: a key point positioning sub-module configured to perform key point detection on the current face image to obtain position coordinates of a preset number of key points; and a key point feature determination sub-module configured to determine, according to the position coordinates of the preset number of key points, key point features representing at least one partial face region on the current face image.
- The apparatus according to claim 15, wherein the key point feature determination sub-module comprises: a coordinate combination determination unit configured to determine, based on the position coordinates of the preset number of key points, a combination of key point coordinates representing a first partial face region on the current image as the key point feature representing the first partial face region, wherein the first partial face region is any one of the at least one partial face region; and/or a characteristic curve determination unit configured to fit, according to the combination of key point coordinates representing the first partial face region, a characteristic curve representing the first partial face region as the key point feature representing the first partial face region.
- The apparatus according to any one of claims 12 to 16, wherein the parameter matching module comprises: a feature matching sub-module configured to determine, for each partial face region in the current face image, a reference key point feature in the reference model database that matches the key point feature of the partial face region as a target reference key point feature of the partial face region; and a bone parameter determination sub-module configured to determine the target bone parameters of the current face image according to the reference bone parameters corresponding to the target reference key point feature of each partial face region in the current face image.
- The apparatus according to claim 17, wherein the feature matching sub-module comprises: a similarity determination unit configured to determine the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database; and a target feature determination unit configured to determine the reference key point feature with the highest similarity as the target reference key point feature of the partial face region.
- The apparatus according to claim 18, wherein the similarity determination unit comprises: a curve fitting subunit configured to fit, according to the combination of key point coordinates of the partial face region, a characteristic curve representing the partial face region; and a similarity determination subunit configured to determine, according to the distance between the characteristic curve and the corresponding reference characteristic curve in the reference model database, the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database.
- The apparatus according to claim 18 or 19, wherein the similarity determination unit comprises: a local similarity determination subunit configured to, in a case where a partial face region includes at least two sub-regions, determine, for each sub-region in the partial face region and for each face image sample in the reference model database, the similarity between the key point feature of the sub-region and the reference key point feature of the corresponding sub-region of the face image sample in the reference model database, to obtain a local similarity corresponding to the sub-region; and an overall similarity determination subunit configured to determine, for each face image sample in the reference model database, an overall similarity between the partial face region and the corresponding partial face region in the face image sample according to the local similarity corresponding to each sub-region, as the similarity between the key point feature of the partial face region and the corresponding reference key point feature of the face image sample in the reference model database.
- A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 10.
- An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 10.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021516410A JP7191213B2 (ja) | 2019-05-15 | 2020-02-21 | Face model generation method and apparatus, electronic device, and computer-readable storage medium |
KR1020217008646A KR102523512B1 (ko) | 2019-05-15 | 2020-02-21 | Creation of a face model |
SG11202103190VA SG11202103190VA (en) | 2019-05-15 | 2020-02-21 | Face model creation |
US17/212,523 US11836943B2 (en) | 2019-05-15 | 2021-03-25 | Virtual face model creation based on key point |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910403884.8 | 2019-05-15 | ||
CN201910403884.8A CN110111418B (zh) | 2019-05-15 | 2019-05-15 | Method and apparatus for creating a face model, and electronic device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/212,523 Continuation US11836943B2 (en) | 2019-05-15 | 2021-03-25 | Virtual face model creation based on key point |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020228389A1 true WO2020228389A1 (zh) | 2020-11-19 |
Family
ID=67490284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/076134 WO2020228389A1 (zh) | 2019-05-15 | 2020-02-21 | Method and apparatus for creating a face model, electronic device, and computer-readable storage medium |
Country Status (7)
Country | Link |
---|---|
US (1) | US11836943B2 (zh) |
JP (1) | JP7191213B2 (zh) |
KR (1) | KR102523512B1 (zh) |
CN (1) | CN110111418B (zh) |
SG (1) | SG11202103190VA (zh) |
TW (1) | TW202044202A (zh) |
WO (1) | WO2020228389A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112973122A (zh) * | 2021-03-02 | 2021-06-18 | NetEase (Hangzhou) Network Co., Ltd. | Game character makeup method and apparatus, and electronic device |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108615014B (zh) * | 2018-04-27 | 2022-06-21 | BOE Technology Group Co., Ltd. | Eye state detection method, apparatus, device, and medium |
CN110111418B (zh) * | 2019-05-15 | 2022-02-25 | Beijing SenseTime Technology Development Co., Ltd. | Method and apparatus for creating a face model, and electronic device |
CN110675475B (zh) * | 2019-08-19 | 2024-02-20 | Tencent Technology (Shenzhen) Co., Ltd. | Face model generation method and apparatus, device, and storage medium |
CN110503700A (zh) * | 2019-08-26 | 2019-11-26 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and apparatus for generating virtual expressions, electronic device, and storage medium |
CN110705448B (zh) * | 2019-09-27 | 2023-01-20 | Beijing SenseTime Technology Development Co., Ltd. | Human body detection method and apparatus |
CN110738157A (zh) * | 2019-10-10 | 2020-01-31 | Nanjing Horizon Robotics Technology Co., Ltd. | Virtual face construction method and apparatus |
CN111354079B (zh) * | 2020-03-11 | 2023-05-02 | Tencent Technology (Shenzhen) Co., Ltd. | Three-dimensional face reconstruction network training and virtual face image generation method and apparatus |
CN111640204B (zh) * | 2020-05-14 | 2024-03-19 | Guangdong Genius Technology Co., Ltd. | Three-dimensional object model construction method and apparatus, electronic device, and medium |
CN111738087B (zh) * | 2020-05-25 | 2023-07-25 | Perfect World (Beijing) Software Technology Development Co., Ltd. | Method and apparatus for generating a game character face model |
CN111652798B (zh) * | 2020-05-26 | 2023-09-29 | Zhejiang Dahua Technology Co., Ltd. | Face pose transfer method and computer storage medium |
CN111714885A (zh) * | 2020-06-22 | 2020-09-29 | NetEase (Hangzhou) Network Co., Ltd. | Game character model generation and character adjustment methods, apparatus, device, and medium |
CN112232183B (zh) * | 2020-10-14 | 2023-04-28 | Douyin Vision Co., Ltd. | Virtual wearable item matching method and apparatus, electronic device, and computer-readable medium |
CN114727002A (zh) * | 2021-01-05 | 2022-07-08 | Beijing Xiaomi Mobile Software Co., Ltd. | Photographing method and apparatus, terminal device, and storage medium |
CN112767348B (zh) * | 2021-01-18 | 2023-11-24 | Shanghai Minglue Artificial Intelligence (Group) Co., Ltd. | Method and apparatus for determining detection information |
CN112967364A (zh) * | 2021-02-09 | 2021-06-15 | MIGU Culture Technology Co., Ltd. | Image processing method, apparatus, and device |
KR102334666B1 (ko) | 2021-05-20 | 2021-12-07 | 알레시오 주식회사 | Face image generation method |
CN114299595A (zh) * | 2022-01-29 | 2022-04-08 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Face recognition method, apparatus, device, storage medium, and program product |
KR20230151821A (ko) | 2022-04-26 | 2023-11-02 | 주식회사 리본소프트 | System and method for generating beautiful three-dimensional characters for use in the metaverse |
KR102494222B1 (ko) * | 2022-09-13 | 2023-02-06 | 주식회사 비브스튜디오스 | Automatic 3D eyebrow generation method |
CN115393532B (zh) * | 2022-10-27 | 2023-03-14 | iFLYTEK Co., Ltd. | Face rigging method, apparatus, device, and storage medium |
KR20240069548A (ko) | 2022-11-10 | 2024-05-20 | 알레시오 주식회사 | Image conversion model provision method, server, and computer program |
CN115797569B (zh) * | 2023-01-31 | 2023-05-02 | 盾钰(上海)互联网科技有限公司 | Dynamic generation method and system for fine-grained facial expression actions of a high-precision digital twin |
CN117152308B (zh) * | 2023-09-05 | 2024-03-22 | 江苏八点八智能科技有限公司 | Virtual human action and expression optimization method and system |
CN117593493A (zh) * | 2023-09-27 | 2024-02-23 | 书行科技(北京)有限公司 | Three-dimensional face fitting method and apparatus, electronic device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080037836A1 (en) * | 2006-08-09 | 2008-02-14 | Arcsoft, Inc. | Method for driving virtual facial expressions by automatically detecting facial expressions of a face image |
CN107705365A (zh) * | 2017-09-08 | 2018-02-16 | 郭睿 | Editable three-dimensional human body model creation method and apparatus, electronic device, and computer program product |
CN109671016A (zh) * | 2018-12-25 | 2019-04-23 | NetEase (Hangzhou) Network Co., Ltd. | Face model generation method and apparatus, storage medium, and terminal |
CN110111418A (zh) * | 2019-05-15 | 2019-08-09 | Beijing SenseTime Technology Development Co., Ltd. | Method and apparatus for creating a face model, and electronic device |
CN110675475A (zh) * | 2019-08-19 | 2020-01-10 | Tencent Technology (Shenzhen) Co., Ltd. | Face model generation method and apparatus, device, and storage medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7856125B2 (en) * | 2006-01-31 | 2010-12-21 | University Of Southern California | 3D face reconstruction from 2D images |
TWI427545B (zh) | 2009-11-16 | 2014-02-21 | Univ Nat Cheng Kung | Face recognition method based on scale-invariant feature transform and face angle estimation |
TWI443601B (zh) | 2009-12-16 | 2014-07-01 | Ind Tech Res Inst | Realistic facial animation system and method thereof |
US8550818B2 (en) * | 2010-05-21 | 2013-10-08 | Photometria, Inc. | System and method for providing and modifying a personalized face chart |
JP2014199536A (ja) * | 2013-03-29 | 2014-10-23 | Konami Digital Entertainment Co., Ltd. | Face model generation device, control method of face model generation device, and program |
CN104715227B (zh) * | 2013-12-13 | 2020-04-03 | Beijing Samsung Telecommunication Technology Research Co., Ltd. | Method and apparatus for locating key points of a face |
KR102357340B1 (ko) * | 2014-09-05 | 2022-02-03 | Samsung Electronics Co., Ltd. | Face recognition method and apparatus |
KR101997500B1 (ko) * | 2014-11-25 | 2019-07-08 | Samsung Electronics Co., Ltd. | Personalized 3D face model generation method and apparatus |
EP3335195A2 (en) * | 2015-08-14 | 2018-06-20 | Metail Limited | Methods of generating personalized 3d head models or 3d body models |
CN107025678A (zh) * | 2016-01-29 | 2017-08-08 | 掌赢信息科技(上海)有限公司 | Driving method and apparatus for a 3D virtual model |
KR101757642B1 (ko) * | 2016-07-20 | 2017-07-13 | (주)레벨소프트 | 3D face modeling apparatus and method |
CN106652025B (zh) * | 2016-12-20 | 2019-10-01 | Wuyi University | Three-dimensional face modeling method based on video streams and multi-attribute face matching, and printing apparatus |
CN108960020A (zh) * | 2017-05-27 | 2018-12-07 | Fujitsu Ltd. | Information processing method and information processing device |
US10796468B2 (en) * | 2018-02-26 | 2020-10-06 | Didimo, Inc. | Automatic rig creation process |
WO2019209431A1 (en) * | 2018-04-23 | 2019-10-31 | Magic Leap, Inc. | Avatar facial expression representation in multidimensional space |
US10706556B2 (en) * | 2018-05-09 | 2020-07-07 | Microsoft Technology Licensing, Llc | Skeleton-based supplementation for foreground image segmentation |
WO2019216593A1 (en) * | 2018-05-11 | 2019-11-14 | Samsung Electronics Co., Ltd. | Method and apparatus for pose processing |
CN109685892A (zh) * | 2018-12-31 | 2019-04-26 | 南京邮电大学盐城大数据研究院有限公司 | Rapid 3D face construction system and construction method |
- 2019-05-15 CN CN201910403884.8A patent/CN110111418B/zh active Active
- 2020-02-21 WO PCT/CN2020/076134 patent/WO2020228389A1/zh active Application Filing
- 2020-02-21 SG SG11202103190VA patent/SG11202103190VA/en unknown
- 2020-02-21 KR KR1020217008646A patent/KR102523512B1/ko active IP Right Grant
- 2020-02-21 JP JP2021516410A patent/JP7191213B2/ja active Active
- 2020-04-30 TW TW109114456A patent/TW202044202A/zh unknown
- 2021-03-25 US US17/212,523 patent/US11836943B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN110111418A (zh) | 2019-08-09 |
US11836943B2 (en) | 2023-12-05 |
SG11202103190VA (en) | 2021-04-29 |
KR20210047920A (ko) | 2021-04-30 |
US20210209851A1 (en) | 2021-07-08 |
CN110111418B (zh) | 2022-02-25 |
TW202044202A (zh) | 2020-12-01 |
KR102523512B1 (ko) | 2023-04-18 |
JP7191213B2 (ja) | 2022-12-16 |
JP2022500790A (ja) | 2022-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020228389A1 (zh) | Method and apparatus for creating a face model, electronic device, and computer-readable storage medium | |
US10163010B2 (en) | Eye pose identification using eye features | |
CN111354079A (zh) | Three-dimensional face reconstruction network training and virtual face image generation method and apparatus | |
US20170161551A1 (en) | Face key point positioning method and terminal | |
US20220148333A1 (en) | Method and system for estimating eye-related geometric parameters of a user | |
CN111325846B (zh) | Expression base determination method, virtual image driving method, apparatus, and medium | |
CN108135469A (zh) | Eyelid shape estimation using eye pose measurement | |
US11282257B2 (en) | Pose selection and animation of characters using video data and training techniques | |
Ming | Robust regional bounding spherical descriptor for 3D face recognition and emotion analysis | |
CN107911643B (zh) | Method and apparatus for presenting scene special effects in video communication | |
CN104573634A (zh) | Three-dimensional face recognition method | |
WO2020037963A1 (zh) | Face image recognition method and apparatus, and storage medium | |
JP2020177615A (ja) | Method for generating a 3D face model for an avatar, and related device | |
CN113570684A (zh) | Image processing method and apparatus, computer device, and storage medium | |
JP2020177620A (ja) | Method for generating a 3D face model for an avatar, and related device | |
CN111815768A (zh) | Three-dimensional face reconstruction method and apparatus | |
CN111108508A (zh) | Facial emotion recognition method, intelligent device, and computer-readable storage medium | |
CN108174141B (zh) | Video communication method and mobile device | |
Kim et al. | Real-time facial feature extraction scheme using cascaded networks | |
US11361467B2 (en) | Pose selection and animation of characters using video data and training techniques | |
KR102160955B1 (ko) | Deep learning-based 3D data generation method and apparatus | |
Xu et al. | A novel method for hand posture recognition based on depth information descriptor | |
US9786030B1 (en) | Providing focal length adjustments | |
WO2023124869A1 (zh) | Method, apparatus, device, and storage medium for liveness detection | |
CN111222448B (zh) | Image conversion method and related products | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20804755 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021516410 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217008646 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20804755 Country of ref document: EP Kind code of ref document: A1 |