CN109978996B - Method, device, terminal and storage medium for generating expression three-dimensional model


Info

Publication number
CN109978996B
Authority
CN
China
Prior art keywords: expression, dimensional model, determining, expression parameter, parameter value
Prior art date
Legal status: Active
Application number
CN201910244730.9A
Other languages
Chinese (zh)
Other versions
CN109978996A (en)
Inventor
帕哈尔丁·帕力万
马里千
周博生
周波
张国鑫
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910244730.9A
Publication of CN109978996A
Application granted
Publication of CN109978996B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a method, a device, a terminal and a storage medium for generating an expression three-dimensional model, and belongs to the technical field of face recognition. The method comprises the following steps: determining feature points of a face image according to the face image; determining a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, wherein each expression parameter value is used for representing the expression of one part of the face; determining, for each expression parameter value, a corresponding three-dimensional model from among a plurality of three-dimensional models associated with that expression parameter value, wherein each three-dimensional model is used for representing the three-dimensional display effect of one part of the face; and generating the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values. By adopting the method and the device, the efficiency of generating the expression three-dimensional model can be improved.

Description

Method, device, terminal and storage medium for generating expression three-dimensional model
Technical Field
The present disclosure relates to the field of face recognition technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for generating an expression three-dimensional model.
Background
With the development of internet technology, users can share their lives through pictures or videos. Sometimes a user does not want to show his or her real face, and can instead choose a virtual expression to cover it. The virtual expression is an expression three-dimensional model that changes along with the user's real expression.
In the related art, the expression three-dimensional model may be generated from the user's real expression as follows. First, coordinate information of a plurality of feature points of the user's face image is determined, and an initial expression three-dimensional model corresponding to the target virtual expression selected by the user is obtained. Initial coordinate information of each feature point in the initial expression three-dimensional model is determined, and for each feature point, the coordinate difference between its coordinate information in the face image and its initial coordinate information in the initial expression three-dimensional model is calculated. The three-dimensional model area corresponding to each feature point is then adjusted according to the corresponding coordinate difference, and the expression three-dimensional model to be displayed is finally generated. The expression three-dimensional model to be displayed is then displayed, so that the user can see, on the screen of the terminal, a virtual expression identical to the user's own expression.
In implementing the present disclosure, the inventors found that the related art has at least the following problems:
the number of feature points recognized in the face image is large, and adjusting the initial expression three-dimensional model according to each feature point takes a long time, so the expression three-dimensional model is generated slowly and inefficiently.
Disclosure of Invention
The disclosure provides a method, a device, a terminal and a storage medium for generating an expression three-dimensional model, which can solve the problem of low efficiency of generating the expression three-dimensional model.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for generating an expression three-dimensional model, including:
determining feature points of the face image according to the face image;
determining a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, wherein the expression parameter values are used for expressing the expression of one part in the face;
according to each expression parameter value, determining a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value, wherein one three-dimensional model is used for representing the three-dimensional display effect of one part in the face;
and generating an expression three-dimensional model according to the three-dimensional model corresponding to each expression parameter value.
Optionally, the determining, according to each expression parameter value, a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value includes:
acquiring the corresponding relation between the numerical range corresponding to each expression parameter value and the three-dimensional model;
determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
Optionally, the determining a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm includes:
determining a plurality of feature points corresponding to each part in the feature points of the face image as a feature point set to obtain a plurality of feature point sets;
and determining the expression parameter value corresponding to each characteristic point set according to the position information of each characteristic point in each characteristic point set and the expression parameter algorithm corresponding to each characteristic point set.
Optionally, the determining, according to the position information of each feature point in each feature point set and the expression parameter algorithm corresponding to each feature point set, an expression parameter value corresponding to each feature point set includes:
and determining a characteristic value of the position information of the plurality of characteristic points in each characteristic point set according to the position calculation relationship among the plurality of characteristic points in the expression parameter algorithm corresponding to each characteristic point set, wherein the characteristic value is used for representing the muscle stretching degree of a part corresponding to one characteristic point set, and the characteristic value is an expression parameter value corresponding to one characteristic point set.
Optionally, the generating an expression three-dimensional model according to the three-dimensional model corresponding to each expression parameter value includes:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
if the expression types of the three-dimensional models corresponding to the expression parameter values are the same, generating expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values;
and if any one of the expression types of the three-dimensional models corresponding to the expression parameter values is different from the others, correcting, according to a first expression type, i.e., the expression type shared by the largest number of three-dimensional models, each three-dimensional model whose expression type is not the first expression type, and generating the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values.
Optionally, the method further comprises:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type;
and when the expression three-dimensional model is displayed, displaying a corresponding expression special effect based on the expression three-dimensional model.
Optionally, the determining an expression special effect corresponding to the expression three-dimensional model includes:
determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model;
determining the size of the display area based on a plurality of feature points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area;
the displaying of the corresponding expression special effect based on the expression three-dimensional model comprises the following steps:
and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for generating an expression three-dimensional model, including:
the determining unit is used for determining the characteristic points of the face image according to the face image;
the determining unit is further configured to determine a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, where the expression parameter values are used to represent the expression of one part in the face;
the determining unit is further configured to determine, according to each expression parameter value, a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value, where one three-dimensional model is used to represent a three-dimensional display effect of a part in a human face;
and the generating unit is used for generating the expression three-dimensional model according to the three-dimensional model corresponding to each expression parameter value.
Optionally, the determining unit is configured to:
acquiring the corresponding relation between the numerical range corresponding to each expression parameter value and the three-dimensional model;
determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
Optionally, the determining unit is configured to:
determining a plurality of feature points corresponding to each part in the feature points of the face image as a feature point set to obtain a plurality of feature point sets;
and determining the expression parameter value corresponding to each characteristic point set according to the position information of each characteristic point in each characteristic point set and the expression parameter algorithm corresponding to each characteristic point set.
Optionally, the determining unit is configured to:
and determining a characteristic value of the position information of the plurality of characteristic points in each characteristic point set according to the position calculation relationship among the plurality of characteristic points in the expression parameter algorithm corresponding to each characteristic point set, wherein the characteristic value is used for representing the muscle stretching degree of a part corresponding to one characteristic point set, and the characteristic value is an expression parameter value corresponding to one characteristic point set.
Optionally, the generating unit is configured to:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
if the expression types of the three-dimensional models corresponding to the expression parameter values are the same, generating expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values;
and if any one of the expression types of the three-dimensional models corresponding to the expression parameter values is different from the others, correcting, according to a first expression type, i.e., the expression type shared by the largest number of three-dimensional models, each three-dimensional model whose expression type is not the first expression type, and generating the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values.
Optionally, the determining unit is further configured to:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type;
and the display unit is used for displaying the corresponding expression special effect based on the expression three-dimensional model when the expression three-dimensional model is displayed.
Optionally, the determining unit is configured to:
determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model;
determining the size of the display area based on a plurality of feature points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area;
the display unit is used for:
and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
one or more processors;
a memory for storing the one or more processor-executable instructions;
wherein the one or more processors are configured to:
the method of the first aspect of the embodiments of the present disclosure is performed.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions, when executed by a processor of a terminal, enable the terminal to perform the method of the first aspect of the embodiments of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an application program product, which, when running on a terminal, causes the terminal to perform the method of the first aspect of the embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the disclosure, the terminal determines feature points of a face image, and then determines a plurality of expression parameter values according to the feature points and an expression parameter algorithm, where each expression parameter value is used for representing the expression of one part of the face. The terminal then determines, for each expression parameter value, a corresponding three-dimensional model from among a plurality of three-dimensional models associated with that expression parameter value, where each three-dimensional model represents the three-dimensional display effect of one part of the face, and generates the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values. The terminal can therefore generate the expression three-dimensional model from pre-stored three-dimensional models corresponding to the expression parameter values, without adjusting an initial expression three-dimensional model according to the coordinates of each feature point. Because the number of expression parameter values is much smaller than the number of feature points used in the related art, the operation of generating the expression three-dimensional model is simplified and takes less time, so the expression three-dimensional model is generated faster and more efficiently.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating a method of generating an expressive three-dimensional model in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of generating an expressive three-dimensional model in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating a scene generating an expressive three-dimensional model in accordance with an exemplary embodiment;
FIG. 4 is an interface diagram illustrating the generation of an expressive three-dimensional model according to an exemplary embodiment;
FIG. 5 is an interface diagram illustrating a generation of an expressive three-dimensional model according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating an apparatus for generating an expressive three-dimensional model in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating an apparatus for generating an expressive three-dimensional model in accordance with an exemplary embodiment;
fig. 8 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a method of generating an expressive three-dimensional model according to an exemplary embodiment, which is used in a terminal, as shown in fig. 1, and includes the following steps.
In step 101, feature points of the face image are determined according to the face image.
In step 102, a plurality of expression parameter values are determined according to the feature points of the face image and an expression parameter algorithm, and the expression parameter values are used for representing the expression of one part in the face.
In step 103, according to each expression parameter value, a three-dimensional model corresponding to each expression parameter value is determined in a plurality of three-dimensional models corresponding to each expression parameter value, and one three-dimensional model is used for representing a three-dimensional display effect of a part in a human face.
In step 104, an expression three-dimensional model is generated according to the three-dimensional model corresponding to each expression parameter value.
Optionally, determining, according to each expression parameter value, a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value, includes:
acquiring the corresponding relation between the numerical range corresponding to each expression parameter value and the three-dimensional model;
and determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
Optionally, determining a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, including:
determining a plurality of feature points corresponding to each part in the feature points of the face image as a feature point set to obtain a plurality of feature point sets;
and determining the expression parameter value corresponding to each characteristic point set according to the position information of each characteristic point in each characteristic point set and the expression parameter algorithm corresponding to each characteristic point set.
Optionally, determining an expression parameter value corresponding to each feature point set according to the position information of each feature point in each feature point set and an expression parameter algorithm corresponding to each feature point set, including:
and determining a characteristic value of the position information of the plurality of characteristic points in each characteristic point set according to the position calculation relationship among the plurality of characteristic points in the expression parameter algorithm corresponding to each characteristic point set, wherein the characteristic value is used for expressing the muscle stretching degree of a part corresponding to one characteristic point set, and the characteristic value is an expression parameter value corresponding to one characteristic point set.
Optionally, generating an expression three-dimensional model according to the three-dimensional model corresponding to each expression parameter value, including:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
if the expression types of the three-dimensional models corresponding to the expression parameter values are the same, generating expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values;
and if any one of the expression types of the three-dimensional models corresponding to the expression parameter values is different from the others, correcting, according to a first expression type, i.e., the expression type shared by the largest number of three-dimensional models, each three-dimensional model whose expression type is not the first expression type, and generating the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values.
Optionally, the method further comprises:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type;
and when the expression three-dimensional model is displayed, displaying a corresponding expression special effect based on the expression three-dimensional model.
Optionally, determining an expression special effect corresponding to the expression three-dimensional model includes:
determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model;
determining the size of the display area based on a plurality of characteristic points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area;
based on the expression three-dimensional model, displaying a corresponding expression special effect, comprising the following steps:
and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
In the disclosure, the terminal determines feature points of a face image, and then determines a plurality of expression parameter values according to the feature points and an expression parameter algorithm, where each expression parameter value is used for representing the expression of one part of the face. The terminal then determines, for each expression parameter value, a corresponding three-dimensional model from among a plurality of three-dimensional models associated with that expression parameter value, where each three-dimensional model represents the three-dimensional display effect of one part of the face, and generates the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values. The terminal can therefore generate the expression three-dimensional model from pre-stored three-dimensional models corresponding to the expression parameter values, without adjusting an initial expression three-dimensional model according to the coordinates of each feature point. Because the number of expression parameter values is much smaller than the number of feature points used in the related art, the operation of generating the expression three-dimensional model is simplified and takes less time, so the expression three-dimensional model is generated faster and more efficiently.
Fig. 2 is a flowchart illustrating a method of generating an expressive three-dimensional model according to an exemplary embodiment, which is used in a terminal, as shown in fig. 2, and includes the following steps.
In step 201, feature points of the face image are determined according to the face image.
In a possible implementation manner, when a user wants to use a virtual expression, the user can open the corresponding application program on the terminal, so that the terminal displays the preview image acquired by the camera. The user can choose to acquire the image with either the front camera or the rear camera.
Through the preview interface, the user can see the preview image currently acquired by the terminal. When the user wants to use a virtual expression, the user can select the virtual expression to be used, for example by tapping the icon of the corresponding virtual expression on the screen of the terminal. The terminal then receives the selected virtual expression and acquires the corresponding initial expression three-dimensional model. A three-dimensional model here is a three-dimensional digital figure composed of meshes and textures, generated by technicians with a three-dimensional modeling tool, and the initial expression three-dimensional model is one pre-constructed by technicians for each virtual expression.
It should be noted that the user may select the virtual expression while previewing on the terminal, while shooting a video with the terminal, or after finishing shooting a video or an image with the terminal; the present disclosure does not limit this.
Then, the terminal acquires a face image, which can be obtained in, but is not limited to, the following feasible ways:
(1) the face image may be a face image extracted from any preview image when the terminal acquires the preview image.
(2) The face image may be a face image extracted from any one of the acquired video frames in the process of recording a video by the terminal.
(3) The face image can be a face image extracted from any video frame in a video stored in the terminal.
(4) The face image may be a face image extracted from any image already stored in the terminal.
It should be noted that the method for extracting a face image from a video frame or an image may be, but is not limited to, a local feature analysis method, an eigenface method, a neural network method, or an elastic-model-based method; the disclosure is not limited thereto.
After the face image is obtained, the terminal determines a plurality of feature points in the face image according to a preset facial feature point recognition algorithm, as shown in Fig. 3. Each of the determined feature points is preset to correspond to a face part.
Facial feature point recognition algorithms may include, but are not limited to, the ASM (Active Shape Model) algorithm, the AAM (Active Appearance Model) algorithm, the CLM (Constrained Local Model) algorithm, CNN (Convolutional Neural Network) models, and the like.
These algorithms are accurate and computationally light, so when they are used to identify the feature points of the face image, the demands on the terminal's computing power and on image quality are low. Most smartphones, tablet computers and similar devices can therefore render expression three-dimensional models with the disclosed scheme, without images shot by professional cameras and without offloading the computation to a desktop computer, which makes displaying virtual expressions more convenient and fast.
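The disclosure does not tie step 201 to any particular recognizer. Purely as an illustration, the feature points could be obtained with dlib's pre-trained 68-point landmark predictor; the model file name below is dlib's, not part of this disclosure, and any detector that outputs identifier-indexed points would serve the same role:

```python
# Illustrative only: obtain the feature points of step 201 with dlib's
# pre-trained 68-point facial landmark predictor.  The disclosure does not
# mandate a particular recognizer; the model file is distributed by dlib.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_feature_points(image_bgr):
    """Return the (x, y) feature points of the first face found, or []."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return []
    shape = predictor(gray, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```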
In step 202, a plurality of feature points corresponding to each part in the feature points of the face image are determined as a feature point set, so as to obtain a plurality of feature point sets.
In a possible implementation manner, after the feature points of the face image are determined in the above step, they are divided according to their corresponding parts, and the feature points corresponding to the same part are grouped into one feature point set. That is, each feature point set corresponds to one part of the face.
Specifically, each identified feature point has a corresponding feature point identifier, and feature points identified by the same facial feature point recognition algorithm have the same meaning for the same identifier; that is, when different face images are processed with the same algorithm, feature points with the same identifier correspond to the same parts. A technician may therefore preset the part corresponding to each feature point identifier. After performing facial feature point identification in step 201, the terminal obtains the pre-stored feature point identifiers corresponding to each part, and groups the feature points bearing those identifiers into one feature point set. For example, if the feature points corresponding to the eye part are identified as feature points 1-29, the feature points with identifiers 1-29 are grouped into one feature point set.
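A minimal sketch of this grouping, assuming 1-based feature point identifiers that map onto list indices; only the eye range (1-29) and the mouth range (31-50, used in step 203) come from this description, and the remaining ranges are placeholders:

```python
# Sketch of step 202: group feature points into per-part feature point sets.
PART_TO_IDS = {
    "eyes":     range(1, 30),   # identifiers 1-29, per the example above
    "mouth":    range(31, 51),  # identifiers 31-50, per the example in step 203
    "eyebrows": range(51, 61),  # assumed range
    "nose":     range(61, 69),  # assumed range
}

def build_feature_point_sets(points):
    """points: list of (x, y); points[i - 1] is the point with identifier i."""
    return {
        part: [points[i - 1] for i in ids if 0 < i <= len(points)]
        for part, ids in PART_TO_IDS.items()
    }
```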
In step 203, an expression parameter value corresponding to each feature point set is determined according to the position information of each feature point in each feature point set and an expression parameter algorithm corresponding to each feature point set.
The expression parameter value is used for representing the expression of one part in the human face.
In a possible implementation manner, each feature point set determined in the above step corresponds to a different part of the face image, and each part's expression changes in its own way. A technician therefore designs an expression parameter algorithm in advance for the expression change characteristics of each part and stores it in the terminal; the expression parameter algorithm may be an algorithm that calculates an expression parameter value from the position information of the feature points of the part.
After the terminal determines the plurality of feature point sets, the position information of each feature point in each feature point set may be determined, and optionally, the position information may be coordinate information in the face image. The terminal obtains an expression parameter algorithm corresponding to each pre-stored feature point set, and calculates an expression parameter value corresponding to each feature point set according to the expression parameter algorithm and the position information of the feature points.
Optionally, the specific process of calculating the expression parameter value may be: and determining a characteristic value of the position information of the plurality of characteristic points in each characteristic point set according to the position calculation relationship among the plurality of characteristic points in the expression parameter algorithm corresponding to each characteristic point set, wherein the characteristic value is used for expressing the muscle stretching degree of a part corresponding to one characteristic point set, and the characteristic value is an expression parameter value corresponding to one characteristic point set.
In a possible implementation manner, the expression parameter algorithm is an algorithm preset by a technician, expression parameter values corresponding to the feature point set may be calculated through multiple calculation manners, and an optional manner may be to calculate the expression parameter values through a position calculation relationship among multiple feature points.
The terminal obtains the position calculation relationship among the feature points in the expression parameter algorithm corresponding to each feature point set, and determines the feature value of the feature point set by applying that relationship to the position information of the feature points. Since each feature point set represents the shape of one part of the face image, the determined feature value represents the degree of change of that part, i.e., the degree of muscle stretch of the part, and this feature value is the expression parameter value corresponding to the feature point set.
For example, assume the feature point identifiers of the feature point set corresponding to the mouth are 31 to 50, and the position calculation relationship of the mouth's expression parameter algorithm is: determine the differences between the position information of feature points 31-40 and that of feature points 41-50, and then take the mean of all the differences. The determined mean is the feature value of the feature point set.
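A sketch of this worked example; reading the "difference of position information" as the Euclidean distance between paired points is an assumption, since the description only specifies a difference followed by a mean:

```python
import math

def mouth_feature_value(points_by_id):
    """Feature value of the mouth feature point set, per the example above:
    pair feature points 31-40 with 41-50, take the difference of their
    position information, and average the differences."""
    diffs = []
    for id_a, id_b in zip(range(31, 41), range(41, 51)):
        (xa, ya), (xb, yb) = points_by_id[id_a], points_by_id[id_b]
        # Euclidean distance as the 'difference' is this sketch's assumption.
        diffs.append(math.hypot(xb - xa, yb - ya))
    return sum(diffs) / len(diffs)
```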
It should be noted that steps 201 to 203 above constitute the process of determining the plurality of expression parameter values according to the feature points of the face image and the expression parameter algorithm.
In step 204, according to each expression parameter value, a three-dimensional model corresponding to each expression parameter value is determined in a plurality of three-dimensional models corresponding to each expression parameter value.
Wherein, a three-dimensional model is used for representing the three-dimensional display effect of a part in the human face.
In a possible implementation manner, to speed up generation of the expression three-dimensional model, a technician stores in the terminal, in advance, a plurality of three-dimensional models corresponding to each expression parameter value; that is, a three-dimensional model is pre-stored for each expression of each part. When generating the expression three-dimensional model, the terminal can then directly acquire the three-dimensional model of each part of the face, which simplifies the operation.
After the expression parameter value corresponding to each feature point set is determined through the steps, for each expression parameter value, one three-dimensional model matched with the expression parameter value is selected from a plurality of three-dimensional models corresponding to the expression parameter value and is used as the three-dimensional model corresponding to the expression parameter value.
Optionally, one processing manner for selecting the three-dimensional model corresponding to the expression parameter value may be as follows: acquiring the corresponding relation between the numerical range corresponding to each expression parameter value and the three-dimensional model; and determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
In a possible implementation manner, the terminal stores in advance the corresponding relationship between the numerical range corresponding to each expression parameter value and the three-dimensional model.
For each expression parameter value, the terminal first obtains the correspondence between numerical ranges and three-dimensional models for that expression parameter value, then determines the numerical range in which the expression parameter value falls, and queries the correspondence for the three-dimensional model of that numerical range, which is the three-dimensional model corresponding to the expression parameter value. Optionally, this correspondence may be stored in the terminal as a correspondence table, in which each three-dimensional model may be represented by a three-dimensional model identifier. For example, the correspondence table between numerical ranges and three-dimensional model identifiers may be as shown in Table 1 below.
TABLE 1

Numerical range     Three-dimensional model identifier
01-10               A001
11-20               A002
21-30               A003
……                  ……
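A minimal sketch of this range lookup, using the values of Table 1 and treating the upper bounds as inclusive, as the table suggests:

```python
import bisect

# Correspondence between numerical ranges and three-dimensional model
# identifiers, mirroring Table 1 (upper bounds inclusive).
RANGE_UPPER_BOUNDS = [10, 20, 30]
MODEL_IDS = ["A001", "A002", "A003"]

def model_for_parameter_value(value):
    """Return the model identifier whose numerical range contains value."""
    idx = bisect.bisect_left(RANGE_UPPER_BOUNDS, value)
    if idx == len(MODEL_IDS):
        raise ValueError("expression parameter value out of range")
    return MODEL_IDS[idx]

# e.g. model_for_parameter_value(15) -> "A002"
```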
In step 205, the expression type of the three-dimensional model corresponding to each expression parameter value is determined.
In a possible implementation manner, after the three-dimensional model corresponding to each expression parameter value is determined through the above steps, the terminal obtains the pre-stored correspondence between three-dimensional models and expression types. Optionally, each three-dimensional model corresponds to a three-dimensional model identifier, so the stored correspondence may be between three-dimensional model identifiers and expression types, and it may be stored in the form of a correspondence table.
For example, the correspondence table between the three-dimensional model identifier and the expression type may be as shown in table 2 below.
TABLE 2

Three-dimensional model identifier     Expression type
A001                                   Furious
A002                                   Angry
A003                                   Aggrieved
……                                     ……
In addition, the expression type may also correspond to an expression type identifier, that is, the correspondence between the three-dimensional model and the expression type may also be a correspondence between a three-dimensional model identifier and an expression type identifier, which is not limited in this disclosure.
In step 206, if the expression types of the three-dimensional models corresponding to each expression parameter value are the same, an expression three-dimensional model is generated according to the three-dimensional model corresponding to each expression parameter value.
In a possible implementation manner, after the expression type of the three-dimensional model corresponding to each expression parameter value is determined through the above steps, these expression types may be compared. If they are all the same, the expressions determined for each part through steps 201 to 205 are consistent, so the probability that any of the determined three-dimensional models is wrong is very small. The determined three-dimensional models corresponding to the expression parameter values can therefore be spliced directly into the initial expression three-dimensional model to generate the expression three-dimensional model corresponding to the face image.
In step 207, if any one of the expression types of the three-dimensional models corresponding to the expression parameter values is different from the others, each three-dimensional model whose expression type is not the first expression type is corrected according to the first expression type, i.e., the expression type shared by the largest number of three-dimensional models, and the expression three-dimensional model is generated according to the three-dimensional models corresponding to the expression parameter values.
In a possible implementation manner, if the comparison shows that any one of the expression types of the three-dimensional models corresponding to the expression parameter values differs from the others, i.e., two or more expression types are present, then the expressions identified for the parts of the face image are inconsistent, and one of the three-dimensional models determined through the above steps may have been identified incorrectly. For example, if the expression type of the three-dimensional models corresponding to the recognized eyes, eyebrows and nose is angry, while the expression type of the three-dimensional model corresponding to the recognized mouth is happy, a recognition error may have occurred for the mouth.
When this situation occurs, the terminal counts the number of three-dimensional models corresponding to each of the at least two identified expression types. The more three-dimensional models an expression type has, the more likely it is the correct expression type for the face image, so the expression type shared by the largest number of three-dimensional models (which may be called the first expression type) is determined. For each expression parameter value whose three-dimensional model's expression type is not the first expression type, a three-dimensional model whose expression type is the first expression type is selected from the plurality of three-dimensional models corresponding to that expression parameter value, as the correct three-dimensional model, and replaces the originally determined one. After all such replacements, the expression types of the three-dimensional models corresponding to the expression parameter values are the same. The three-dimensional models corresponding to the expression parameter values can then be spliced into the initial expression three-dimensional model to generate the expression three-dimensional model corresponding to the face image.
Thus, if the terminal makes an error in recognizing a certain part of the face image, the error can be corrected through step 207. Moreover, when the user's ability to express an emotion is weak, for example when the user's eyes are small, an expression such as a glare is difficult to recognize from the eye part in the captured face image, but may still be recognized from other parts (such as the eyebrows and forehead). The three-dimensional model of the eye part, from which the glare could not be recognized, can then be corrected, so that the generated expression three-dimensional model is closer to the user's real expression, improving the accuracy of generating the expression three-dimensional model.
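A sketch of the majority-vote correction of steps 205-207, using the reconstructed expression types of Table 2 (abbreviated); `candidates_by_part`, the per-part candidate model lookup, is a hypothetical stand-in for the pre-stored models described above:

```python
from collections import Counter

# Correspondence between model identifiers and expression types (cf. Table 2).
MODEL_TO_TYPE = {"A001": "furious", "A002": "angry", "A003": "aggrieved"}

def correct_models(models_by_part, candidates_by_part):
    """Steps 205-207: majority-vote the expression type, replace outliers.

    models_by_part:     {part: model_id} chosen in step 204
    candidates_by_part: {part: [model_id, ...]}, the pre-stored candidate
                        models per part (a hypothetical lookup in this sketch)
    """
    types = {part: MODEL_TO_TYPE[m] for part, m in models_by_part.items()}
    # The "first expression type" shared by the most three-dimensional models.
    first_type, _ = Counter(types.values()).most_common(1)[0]
    corrected = dict(models_by_part)
    for part, expr_type in types.items():
        if expr_type != first_type:
            # Swap in a candidate model whose type is the first expression type.
            for candidate in candidates_by_part[part]:
                if MODEL_TO_TYPE.get(candidate) == first_type:
                    corrected[part] = candidate
                    break
    return corrected, first_type
```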
It should be noted that steps 205 to 207 above constitute the process of generating the expression three-dimensional model according to the three-dimensional model corresponding to each expression parameter value.
In step 208, the expressive three-dimensional model is displayed.
In a possible implementation manner, after the expression three-dimensional model corresponding to the face image is generated, the terminal displays it, and the user can view it on the terminal, as shown in Figs. 4 and 5. If the face image comes from a video frame, the terminal performs steps 201 to 207 on the face image in each video frame of the video, obtains the expression three-dimensional model corresponding to the face image in each video frame, and displays it. As the video plays, the user therefore sees a virtual expression that changes in real time with the face in the video, which adds interest.
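A sketch of this per-frame pipeline, chaining the earlier sketches; `compute_parameter_value` is a hypothetical placeholder for each part's expression parameter algorithm (only the mouth version is sketched above), and the final splicing and rendering are left to the 3D engine:

```python
import cv2

def render_virtual_expression(video_path, candidates_by_part):
    """Apply steps 201-207 to the face image in every video frame."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        points = detect_feature_points(frame)          # step 201
        if not points:
            continue
        point_sets = build_feature_point_sets(points)  # step 202
        params = {part: compute_parameter_value(part, pts)   # step 203
                  for part, pts in point_sets.items()}
        models = {part: model_for_parameter_value(v)   # step 204
                  for part, v in params.items()}
        models, _ = correct_models(models, candidates_by_part)  # steps 205-207
        # Splice the part models into the initial expression three-dimensional
        # model and render; this is left to the 3D engine.
    capture.release()
```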
Optionally, in order to increase the interest of the virtual expression, a corresponding expression special effect may be added according to the expression of the user, and the corresponding processing manner may be as follows: respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value; determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type; and when the expression three-dimensional model is displayed, displaying the corresponding expression special effect based on the expression three-dimensional model.
In a possible implementation manner, after the expression type of the three-dimensional model corresponding to each expression parameter value is determined in step 205, the expression type with the largest number of three-dimensional models (which may be called the first expression type) is determined, since it is the most likely correct expression type for the face image. The terminal then determines the expression special effect corresponding to the first expression type according to the pre-stored correspondence between expression types and expression special effects; this is the expression special effect corresponding to the facial expression in the face image. When displaying the expression three-dimensional model, the terminal displays the determined expression special effect at the corresponding part of the expression three-dimensional model.
For example, if the expression types determined in step 205 include angry, happy and surprised, and the angry expression type has the largest number of three-dimensional models, the terminal looks up the expression special effect corresponding to angry in the correspondence between expression types and expression special effects. If the special effect corresponding to angry is flames at the mouth, the terminal displays the flame effect at the mouth of the expression three-dimensional model when displaying it. This helps the user emphasize a particular emotion and adds interest.
Optionally, the specific processing manner for determining the expression special effect may be as follows: determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model; determining the size of the display area based on a plurality of feature points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area; and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
In a possible implementation manner, when the terminal determines the expression special effect corresponding to the first expression type according to the pre-stored corresponding relationship between the expression type and the expression special effect, the terminal may further determine a display area corresponding to the expression special effect in the expression three-dimensional model. Optionally, the display area may be a feature point set, and the position information of the display area is determined according to the position information of the plurality of feature points in the corresponding feature point set.
After the display area is determined, its size is calculated according to the position information of the feature points corresponding to the display area, the display size of the expression special effect is determined according to the size of the display area, and the initial expression special effect is resized to the display size. When the expression three-dimensional model is displayed, the resized expression special effect is displayed in the corresponding display area of the model.
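A sketch of this sizing step; taking the display area's size as the bounding box of its feature points is an assumption, since the description only says the size is determined from the position information of those points:

```python
def effect_display_size(region_points, effect_w, effect_h):
    """Scale an expression special effect to fit its display area.

    region_points: the (x, y) feature points of the display area
    effect_w, effect_h: the initial size of the expression special effect
    """
    xs = [x for x, _ in region_points]
    ys = [y for _, y in region_points]
    # Bounding box of the feature points as the display area size (assumed).
    region_w, region_h = max(xs) - min(xs), max(ys) - min(ys)
    scale = min(region_w / effect_w, region_h / effect_h)
    return int(effect_w * scale), int(effect_h * scale)
```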
In the disclosure, the terminal determines feature points of a face image, and then determines a plurality of expression parameter values according to the feature points and an expression parameter algorithm, where each expression parameter value is used for representing the expression of one part of the face. The terminal then determines, for each expression parameter value, a corresponding three-dimensional model from among a plurality of three-dimensional models associated with that expression parameter value, where each three-dimensional model represents the three-dimensional display effect of one part of the face, and generates the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values. The terminal can therefore generate the expression three-dimensional model from pre-stored three-dimensional models corresponding to the expression parameter values, without adjusting an initial expression three-dimensional model according to the coordinates of each feature point. Because the number of expression parameter values is much smaller than the number of feature points used in the related art, the operation of generating the expression three-dimensional model is simplified and takes less time, so the expression three-dimensional model is generated faster and more efficiently.
Fig. 6 is a block diagram illustrating an apparatus for generating an expression three-dimensional model according to an exemplary embodiment. Referring to Fig. 6, the apparatus includes a determining unit 610 and a generating unit 620.
A determining unit 610, configured to determine feature points of a face image according to the face image;
the determining unit 610 is further configured to determine a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, where the expression parameter values are used to represent an expression of a part in a face;
the determining unit 610 is further configured to determine, according to each expression parameter value, a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value, where one three-dimensional model is used to represent a three-dimensional display effect of a part in a human face;
and the generating unit 620 is configured to generate an expression three-dimensional model according to the three-dimensional model corresponding to each expression parameter value.
Optionally, the determining unit 610 is configured to:
acquiring the corresponding relation between the numerical range corresponding to each expression parameter value and the three-dimensional model;
determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
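As an illustrative sketch, such a correspondence can be kept as per-part lists of (lower bound, upper bound, model identifier) entries, with the parameter value selecting the entry whose range contains it. All names and numeric ranges below are assumptions, not data from the disclosure.

```python
# Illustrative range-to-model lookup; every identifier and numeric range here
# is an assumption for demonstration, not data from the disclosure.
RANGE_TABLE = {
    "mouth": [(0.0, 0.3, "mouth_closed"), (0.3, 0.7, "mouth_half_open"), (0.7, 1.01, "mouth_open")],
    "left_eye": [(0.0, 0.2, "left_eye_closed"), (0.2, 1.01, "left_eye_open")],
}

def model_for_value(part: str, value: float) -> str:
    """Return the pre-stored model whose numerical range contains the value."""
    for lo, hi, model_id in RANGE_TABLE[part]:
        if lo <= value < hi:  # upper bounds padded to 1.01 so a value of 1.0 is covered
            return model_id
    raise ValueError(f"no model covers value {value} for part {part}")
```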
Optionally, the determining unit 610 is configured to:
determining a plurality of feature points corresponding to each part in the feature points of the face image as a feature point set to obtain a plurality of feature point sets;
and determining the expression parameter value corresponding to each characteristic point set according to the position information of each characteristic point in each characteristic point set and the expression parameter algorithm corresponding to each characteristic point set.
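For illustration only, grouping detected landmarks into per-part feature point sets might look as follows, assuming a generic 68-point landmark layout; the index ranges are assumptions, not values from the disclosure.

```python
# Illustrative per-part grouping of landmarks, assuming a generic 68-point
# layout; these index ranges are assumptions, not values from the disclosure.
PART_INDICES = {
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "mouth": range(48, 68),
}

def split_into_sets(landmarks):
    """landmarks: sequence of (x, y) points -> {part: [points]} feature point sets."""
    return {part: [landmarks[i] for i in idx] for part, idx in PART_INDICES.items()}
```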
Optionally, the determining unit 610 is configured to:
determining, according to the position calculation relationship among the feature points specified by the expression parameter algorithm corresponding to each feature point set, a characteristic value from the position information of the feature points in that set, where the characteristic value represents the degree of muscle stretch of the part corresponding to the feature point set and serves as the expression parameter value of that set.
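One hypothetical position calculation relationship of this kind is the ratio of the vertical lip gap to the mouth width as a mouth-open parameter; the sketch below shows the shape of such an algorithm, with all point indices assumed.

```python
# Hypothetical expression parameter algorithm for the mouth: the ratio of the
# vertical lip gap to the mouth width, as a proxy for muscle stretch. The
# point indices assume mouth corners at 0 and 6 and lip midpoints at 3 and 9.
import math

def mouth_open_value(mouth_points):
    """mouth_points: list of (x, y) feature points of the mouth region."""
    left_corner, right_corner = mouth_points[0], mouth_points[6]
    top_mid, bottom_mid = mouth_points[3], mouth_points[9]
    width = math.dist(left_corner, right_corner)
    gap = math.dist(top_mid, bottom_mid)
    return gap / width if width > 0 else 0.0
```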
Optionally, the generating unit 620 is configured to:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
if the expression types of the three-dimensional models corresponding to the expression parameter values are the same, generating expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values;
and if any expression type among the three-dimensional models corresponding to the expression parameter values differs from the others, determining the first expression type, namely the expression type shared by the largest number of three-dimensional models, correcting every three-dimensional model whose expression type is not the first expression type according to the first expression type, and generating the expression three-dimensional model according to the three-dimensional models corresponding to each expression parameter value.
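The correction step can be read as a majority vote over the per-part expression types. The following sketch illustrates one way to implement it; `lookup_model` is a hypothetical helper passed in by the caller, not an identifier from the text.

```python
# A sketch of the consistency correction, assuming each part's model carries
# an expression type and a same-part model of the majority type can be looked
# up; `lookup_model` is a hypothetical helper, not an identifier from the text.
from collections import Counter
from typing import Callable, Dict, Tuple

def harmonize(models: Dict[str, Tuple[str, str]],
              lookup_model: Callable[[str, str], str]) -> Dict[str, Tuple[str, str]]:
    """models: {part: (model_id, expression_type)} -> corrected mapping."""
    majority_type, _ = Counter(t for _, t in models.values()).most_common(1)[0]
    corrected = {}
    for part, (model_id, etype) in models.items():
        if etype != majority_type:
            # Re-select this part's model under the majority ("first") type.
            model_id = lookup_model(part, majority_type)
        corrected[part] = (model_id, majority_type)
    return corrected
```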
Optionally, as shown in Fig. 7, the determining unit 610 is further configured to:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value;
determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type;
and a display unit 630, configured to display a corresponding expression special effect based on the expression three-dimensional model when the expression three-dimensional model is displayed.
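A minimal sketch of the effect selection, assuming a pre-stored mapping from expression type to effect asset; the names and file paths below are placeholders, not from the disclosure.

```python
# Assumed mapping from an expression type to a pre-stored effect asset;
# the names and file paths are placeholders, not from the disclosure.
from collections import Counter

EFFECT_BY_TYPE = {"smile": "sparkle.png", "angry": "steam.png", "cry": "tears.png"}

def effect_for_types(part_types):
    """part_types: per-part expression types of the assembled model."""
    majority_type, _ = Counter(part_types).most_common(1)[0]
    return EFFECT_BY_TYPE.get(majority_type)  # None if no effect is stored
```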
Optionally, the determining unit 610 is configured to:
determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model;
determining the size of the display area based on a plurality of feature points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area;
the display unit 630 is configured to:
and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
Fig. 8 is a block diagram illustrating the structure of a terminal according to an exemplary embodiment. The terminal 800 may be a portable mobile terminal such as a smartphone or a tablet computer. The terminal 800 may also be referred to by other names such as user equipment or portable terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 801 may be implemented in at least one of the following hardware forms: a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 802 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 802 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 802 stores at least one instruction that is executed by the processor 801 to implement the method of generating an expression three-dimensional model provided herein.
In some embodiments, the terminal 800 may further include a peripheral interface 803 and at least one peripheral. Specifically, the peripherals include at least one of: a radio frequency circuit 804, a touch display screen 805, a camera 806, an audio circuit 807, a positioning component 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 804 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, successive generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The touch display screen 805 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. The touch display 805 can also capture touch signals on or above its surface; such a touch signal may be input to the processor 801 as a control signal for processing. The touch display 805 provides virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display 805, disposed on the front panel of the terminal 800; in other embodiments, there may be at least two touch displays 805, disposed on different surfaces of the terminal 800 or in a folded design; in still other embodiments, the touch display 805 may be a flexible display disposed on a curved or folded surface of the terminal 800. The touch display 805 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The touch display 805 may be an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode) display.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Generally, the front camera is used for video calls or self-portraits, and the rear camera for taking photos or videos. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera can be fused for a background blurring function, and the main camera and the wide-angle camera for panoramic and VR (Virtual Reality) shooting. In some embodiments, the camera assembly 806 may also include a flash, which may be a single-color-temperature or dual-color-temperature flash. A dual-color-temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 807 provides an audio interface between the user and the terminal 800, and may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 801 for processing or to the radio frequency circuit 804 for voice communication. For stereo collection or noise reduction, a plurality of microphones may be provided at different parts of the terminal 800; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves, and may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). The positioning component 808 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 809 supplies power to the components in the terminal 800. The power supply 809 may use alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 809 includes a rechargeable battery, the battery may be charged over a wired line or through a wireless coil, and may also support fast-charge technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 800; for example, it may detect the components of gravitational acceleration on the three axes. The processor 801 may control the touch display 805 to show the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used to collect game or user motion data.
The gyro sensor 812 may detect the body orientation and rotation angle of the terminal 800 and may cooperate with the acceleration sensor 811 to capture the user's 3D motion with respect to the terminal 800. From the data collected by the gyro sensor 812, the processor 801 may implement functions such as motion sensing (for example, changing the UI according to the user's tilting operation), image stabilization while shooting, game control, and inertial navigation.
The pressure sensor 813 may be disposed on the side bezel of the terminal 800 and/or beneath the touch display 805. When disposed on the side bezel, it can detect the user's grip on the terminal 800 for left/right-hand recognition or shortcut operations. When disposed beneath the touch display 805, it allows operable controls on the UI to be manipulated according to the pressure of the user's touch. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 814 collects the user's fingerprint to identify the user. Upon identifying the user as trusted, the processor 801 authorizes the user to perform sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 814 may be disposed on the front, back, or side of the terminal 800; when a physical button or vendor logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with it.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the touch display 805 based on the ambient light intensity collected by the optical sensor 815: when the ambient light intensity is high, the display brightness is turned up; when it is low, the display brightness is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
The proximity sensor 816, also known as a distance sensor, is typically provided on the front of the terminal 800 and collects the distance between the user and the front of the terminal. In one embodiment, when the proximity sensor 816 detects that this distance is gradually decreasing, the processor 801 controls the touch display 805 to switch from the bright-screen state to the dark-screen state; when it detects that the distance is gradually increasing, the processor 801 controls the touch display 805 to switch from the dark-screen state back to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in Fig. 8 does not limit the terminal 800, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium comprising instructions, such as the memory 802 comprising instructions, executable by the processor 801 of the terminal 800 to perform the above-described method of generating an expression three-dimensional model. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided an application program product comprising one or more instructions executable by the processor 801 of the terminal 800 to perform the above-described method of generating an expression three-dimensional model.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A method of generating a three-dimensional model of an expression, comprising:
determining feature points of the face image according to the face image;
determining a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, wherein each expression parameter value is used for expressing the expression of one part in the face;
according to each expression parameter value in the expression parameter values, determining a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value, wherein one three-dimensional model is used for representing the three-dimensional display effect of one part in the face;
and generating an expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values.
2. The method for generating the expression three-dimensional model according to claim 1, wherein the determining, according to each expression parameter value of the plurality of expression parameter values, the three-dimensional model corresponding to each expression parameter value in the plurality of three-dimensional models corresponding to each expression parameter value comprises:
acquiring the corresponding relation between the numerical range corresponding to each expression parameter value in the expression parameter values and the three-dimensional model;
determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
3. The method for generating the expression three-dimensional model according to claim 1, wherein the determining a plurality of expression parameter values according to the feature points of the facial image and an expression parameter algorithm comprises:
determining a plurality of feature points corresponding to each part in the feature points of the face image as a feature point set to obtain a plurality of feature point sets;
and determining the expression parameter value corresponding to each characteristic point set according to the position information of each characteristic point in each characteristic point set and the expression parameter algorithm corresponding to each characteristic point set.
4. The method for generating the expression three-dimensional model according to claim 3, wherein the determining the expression parameter value corresponding to each feature point set according to the position information of each feature point in each feature point set and the expression parameter algorithm corresponding to each feature point set comprises:
and determining a characteristic value of the position information of the plurality of characteristic points in each characteristic point set according to the position calculation relationship among the plurality of characteristic points in the expression parameter algorithm corresponding to each characteristic point set, wherein the characteristic value is used for representing the muscle stretching degree of a part corresponding to one characteristic point set, and the characteristic value is an expression parameter value corresponding to one characteristic point set.
5. The method for generating the expression three-dimensional model according to claim 1, wherein the generating the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values comprises:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value in the expression parameter values;
if the expression types of the three-dimensional models corresponding to the expression parameter values are the same, generating expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values;
and if any expression type in the expression types of the three-dimensional models corresponding to each expression parameter value is different from other expression types, correcting the three-dimensional models, which are not the three-dimensional models corresponding to the expression parameter values of the first expression type, according to the first expression type with the largest number of the three-dimensional models with the same expression type, and generating the expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values.
6. The method of generating an expressive three-dimensional model as claimed in claim 1, further comprising:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value in the expression parameter values;
determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type;
and when the expression three-dimensional model is displayed, displaying a corresponding expression special effect based on the expression three-dimensional model.
7. The method for generating the expression three-dimensional model according to claim 6, wherein the determining the expression special effect corresponding to the expression three-dimensional model comprises:
determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model;
determining the size of the display area based on a plurality of feature points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area;
the displaying of the corresponding expression special effect based on the expression three-dimensional model comprises the following steps:
and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
8. An apparatus for generating a three-dimensional model of an expression, comprising:
the determining unit is used for determining the characteristic points of the face image according to the face image;
the determining unit is further configured to determine a plurality of expression parameter values according to the feature points of the face image and an expression parameter algorithm, where each expression parameter value is used to represent an expression of a part in a face;
the determining unit is further configured to determine, according to each expression parameter value of the plurality of expression parameter values, a three-dimensional model corresponding to each expression parameter value in a plurality of three-dimensional models corresponding to each expression parameter value, where one three-dimensional model is used to represent a three-dimensional display effect of a part in a human face;
and the generating unit is used for generating the expression three-dimensional model according to the three-dimensional models corresponding to the expression parameter values.
9. The apparatus for generating an expressive three-dimensional model according to claim 8, wherein the determining unit is configured to:
acquiring the corresponding relation between the numerical range corresponding to each expression parameter value in the expression parameter values and the three-dimensional model;
determining the numerical range of each expression parameter value, and determining the three-dimensional model corresponding to the numerical range of each expression parameter value.
10. The apparatus for generating an expressive three-dimensional model according to claim 8, wherein the determining unit is configured to:
determining a plurality of feature points corresponding to each part in the feature points of the face image as a feature point set to obtain a plurality of feature point sets;
and determining the expression parameter value corresponding to each characteristic point set according to the position information of each characteristic point in each characteristic point set and the expression parameter algorithm corresponding to each characteristic point set.
11. The apparatus for generating an expressive three-dimensional model according to claim 10, wherein the determining unit is configured to:
and determining a characteristic value of the position information of the plurality of characteristic points in each characteristic point set according to the position calculation relationship among the plurality of characteristic points in the expression parameter algorithm corresponding to each characteristic point set, wherein the characteristic value is used for representing the muscle stretching degree of a part corresponding to one characteristic point set, and the characteristic value is an expression parameter value corresponding to one characteristic point set.
12. The apparatus for generating an expressive three-dimensional model according to claim 8, wherein the generating unit is configured to:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value in the expression parameter values;
if the expression types of the three-dimensional models corresponding to the expression parameter values are the same, generating expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values;
and if any expression type in the expression types of the three-dimensional models corresponding to each expression parameter value is different from other expression types, correcting the three-dimensional models, which are not the three-dimensional models corresponding to the expression parameter values of the first expression type, according to the first expression type with the largest number of the three-dimensional models with the same expression type, and generating the expression three-dimensional models according to the three-dimensional models corresponding to the expression parameter values.
13. The apparatus for generating an expressive three-dimensional model according to claim 8, wherein the determining unit is further configured to:
respectively determining the expression type of the three-dimensional model corresponding to each expression parameter value in the expression parameter values;
determining an expression special effect corresponding to the expression three-dimensional model according to the expression type with the largest number of three-dimensional models with the same expression type;
and the display unit is used for displaying the corresponding expression special effect based on the expression three-dimensional model when the expression three-dimensional model is displayed.
14. The apparatus for generating an expressive three-dimensional model as claimed in claim 13, wherein the determining unit is configured to:
determining an expression special effect corresponding to the expression three-dimensional model and a display area corresponding to the expression special effect in the expression three-dimensional model;
determining the size of the display area based on a plurality of feature points corresponding to the display area, and determining the display size of the expression special effect according to the size of the display area;
the display unit is used for:
and displaying the expression special effect according to the display area and the display size corresponding to the expression special effect.
15. A terminal, comprising:
one or more processors;
a memory for storing instructions executable by the one or more processors; wherein the one or more processors are configured to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the method of generating an expression three-dimensional model according to any one of claims 1-7.