CN115393562A - Virtual image display method, device, terminal and storage medium - Google Patents


Info

Publication number
CN115393562A
CN115393562A (application CN202211079191.6A)
Authority
CN
China
Prior art keywords
eyebrow, target, dimensional, contour, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211079191.6A
Other languages
Chinese (zh)
Inventor
邹倩芳
马里千
张国鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202211079191.6A priority Critical patent/CN115393562A/en
Publication of CN115393562A publication Critical patent/CN115393562A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a virtual image display method, a virtual image display device, a terminal and a storage medium, and belongs to the field of internet technology. The method comprises the following steps: acquiring a target eyebrow map and a target three-dimensional face model corresponding to a target face image, wherein the target three-dimensional face model does not include eyebrows; extracting eyebrow contour information from the target eyebrow map, wherein the eyebrow contour information represents the contour of the eyebrows in the target eyebrow map; generating a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information; and displaying the virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model. The method converts the two-dimensional map into a three-dimensional model for the eyebrows alone, so that the virtual image corresponding to the target face image is displayed based on the target three-dimensional eyebrow model and the target three-dimensional face model separately. This keeps the eyebrow lines in the displayed virtual image smooth and improves the display effect of the virtual image.

Description

Virtual image display method, device, terminal and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for displaying an avatar.
Background
With the rapid development of computer technology and the mobile internet, the widespread use of virtual images (avatars) has added enjoyment to people's lives. In the related art, a stylized avatar is generated from a collected face image of a user. However, generating the avatar requires attaching a two-dimensional face map to a three-dimensional face model, which can leave the eyebrows in the two-dimensional face map with uneven lines and thus degrade the display effect of the avatar.
Disclosure of Invention
The present disclosure provides an avatar display method, apparatus, terminal and storage medium, which can improve the avatar display effect. The technical scheme of the disclosure is as follows.
According to a first aspect of an embodiment of the present disclosure, there is provided an avatar display method, including:
acquiring a target eyebrow map and a target three-dimensional face model corresponding to a target face image, wherein the target three-dimensional face model does not include eyebrows;
extracting eyebrow contour information from the target eyebrow map, wherein the eyebrow contour information represents the contour of the eyebrow in the target eyebrow map;
generating a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information;
and displaying a virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, the extracting eyebrow contour information from the target eyebrow map includes:
carrying out contour point identification on the target eyebrow map to obtain two-dimensional coordinates of a plurality of target contour points;
determining the eyebrow contour information based on the two-dimensional coordinates of the target contour points, wherein the eyebrow contour information comprises the two-dimensional coordinates of at least three target contour points.
In some embodiments, the determining the eyebrow contour information based on the two-dimensional coordinates of the plurality of target contour points includes:
determining a first brow head point, a second brow head point and a brow tail point from the plurality of target contour points;
dividing the contour of the target eyebrow map into three eyebrow segments by taking the first brow head point, the second brow head point and the brow tail point as the end points of the eyebrow segments, wherein each eyebrow segment comprises two end points and at least one target contour point between the two end points;
respectively determining segment contour information corresponding to each eyebrow segment based on the two-dimensional coordinates of the target contour points in each eyebrow segment, wherein the first segment contour information corresponding to the first eyebrow segment comprises the two-dimensional coordinates of the first brow head point, the second brow head point and at least one target contour point between them; the second segment contour information corresponding to the second eyebrow segment comprises the two-dimensional coordinates of the brow tail point, the first brow head point and at least one target contour point between them; and the third segment contour information corresponding to the third eyebrow segment comprises the two-dimensional coordinates of the brow tail point, the second brow head point and at least one target contour point between them.
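The segment division above can be sketched as follows. This is a hypothetical illustration, not code from the disclosure: it assumes the contour points are ordered around a closed contour and that the index positions of the two brow head points and the brow tail point are already known (e.g. from a landmark detector with a fixed point ordering).

```python
# Hypothetical sketch: split an ordered, closed eyebrow contour into three
# segments at the two brow head points and the brow tail point. Each segment
# keeps both of its end points, so adjacent segments share an end point.

def split_contour(points, head1_idx, head2_idx, tail_idx):
    """Return three segments, each containing its two end points and every
    contour point between them, walking the closed contour in order."""
    n = len(points)

    def arc(i, j):
        # Collect points from index i to index j inclusive, wrapping around.
        seg = []
        k = i
        while True:
            seg.append(points[k])
            if k == j:
                return seg
            k = (k + 1) % n

    first = arc(head1_idx, head2_idx)   # between the two brow head points
    second = arc(head2_idx, tail_idx)   # brow head to brow tail
    third = arc(tail_idx, head1_idx)    # brow tail back to the first brow head
    return first, second, third
```

Because each arc includes both end points, every segment satisfies the claim's requirement of "two end points and at least one target contour point between the two end points" whenever the split indices are at least two apart.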
In some embodiments, the determining, based on the two-dimensional coordinates of the target contour point in each of the eyebrow segments, the segment contour information corresponding to each of the eyebrow segments respectively includes:
for any of the eyebrow segments:
determining a first interpolation function corresponding to the eyebrow segment based on the two-dimensional coordinates of each target contour point in the eyebrow segment;
determining two-dimensional coordinates corresponding to at least one first interpolation point by adopting the first interpolation function;
and determining the segment contour information corresponding to the eyebrow segment based on the two-dimensional coordinates of the end points in the eyebrow segment and the two-dimensional coordinates corresponding to the at least one first interpolation point.
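A minimal sketch of such a first interpolation function follows. The disclosure does not fix the interpolation scheme, so piecewise-linear interpolation with chord-length parameterization is assumed here for brevity; a spline could be substituted. The function maps a normalized parameter t in [0, 1] (playing the role of an interpolation point's sequence number) to a two-dimensional coordinate on the segment.

```python
# Hypothetical sketch: build an interpolation function f(t) -> (x, y),
# t in [0, 1], from the ordered 2-D contour points of one eyebrow segment.
import bisect
import math

def make_interp(points):
    # Chord-length parameterization: each input point gets a parameter value
    # proportional to its accumulated distance along the polyline.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    ts = [d / total for d in dists]

    def f(t):
        # Locate the polyline piece containing t and interpolate linearly.
        i = min(bisect.bisect_right(ts, t), len(points) - 1)
        if i == 0:
            return points[0]
        t0, t1 = ts[i - 1], ts[i]
        w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

    return f
```

Evaluating f at the sequence numbers of the first interpolation sequence then yields the interpolated two-dimensional coordinates for the segment.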
In some embodiments, the first interpolation function is configured to represent a correspondence between a sequence number of a first interpolation point in the eyebrow segment and two-dimensional coordinates of the first interpolation point, and the determining, by using the first interpolation function, the two-dimensional coordinates corresponding to at least one first interpolation point includes:
determining a first interpolation sequence corresponding to the eyebrow segment, wherein the first interpolation sequence comprises sequence numbers of a plurality of first interpolation points;
and determining the two-dimensional coordinates corresponding to each first interpolation point by adopting the first interpolation function.
In some embodiments, the determining a first interpolation sequence corresponding to the eyebrow segment includes:
determining a maximum distance corresponding to reference segment contour information of the eyebrow segment, wherein the maximum distance is the distance between the two end points in the reference segment contour information, and the reference segment contour information is the contour information of the eyebrow segment in a reference three-dimensional eyebrow model corresponding to a reference eyebrow map;
determining an accumulated distance corresponding to each reference three-dimensional contour point in the reference segment contour information, wherein the accumulated distance is the distance between the reference three-dimensional contour point and a target end point in the reference segment contour information;
and determining a first interpolation sequence corresponding to the eyebrow segment based on the ratio of the accumulated distance corresponding to each reference three-dimensional contour point to the maximum distance.
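One plausible reading of these steps is an arc-length normalization: each reference point's accumulated distance from the segment's start end point, divided by the segment's total length, becomes a sequence number in [0, 1], so interpolated points reproduce the reference model's point spacing. A hedged sketch, treating both distances as accumulated polyline lengths:

```python
# Hypothetical sketch: derive the first interpolation sequence for a segment
# from the ordered 3-D points of the corresponding reference segment. Each
# sequence number is the ratio of a point's accumulated distance from the
# start end point to the maximum (total) distance between the two end points.
import math

def interp_sequence(ref_points_3d):
    acc = [0.0]
    for p, q in zip(ref_points_3d, ref_points_3d[1:]):
        acc.append(acc[-1] + math.dist(p, q))
    max_dist = acc[-1]  # accumulated distance between the two end points
    return [d / max_dist for d in acc]
```

The resulting sequence always starts at 0.0 (the target end point) and ends at 1.0 (the other end point).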
In some embodiments, the generating a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information includes:
and deforming a reference three-dimensional eyebrow model corresponding to a reference eyebrow map based on the eyebrow contour information to obtain the target three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model matches the eyebrow contour information.
In some embodiments, the eyebrow contour information includes two-dimensional coordinates of at least three target contour points, and the deforming a reference three-dimensional eyebrow model corresponding to a reference eyebrow map based on the eyebrow contour information to obtain the target three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model matches the eyebrow contour information includes:
determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map and the reference three-dimensional eyebrow model;
and deforming the reference three-dimensional eyebrow model based on the three-dimensional coordinates of each target three-dimensional contour point to obtain the target three-dimensional eyebrow model.
In some embodiments, the eyebrow contour information includes first segment contour information corresponding to a first eyebrow segment, second segment contour information corresponding to a second eyebrow segment, and third segment contour information corresponding to a third eyebrow segment, each segment contour information includes two-dimensional coordinates of at least three target contour points, the two-dimensional coordinates include a first coordinate belonging to a first dimension and a second coordinate belonging to a second dimension;
the reference eyebrow map comprises two-dimensional coordinates of a plurality of reference contour points, the reference three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of reference three-dimensional contour points, the three-dimensional coordinates comprising a first coordinate belonging to the first dimension, a second coordinate belonging to the second dimension, and a third coordinate belonging to the third dimension;
the determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map, and the reference three-dimensional eyebrow model includes:
respectively adjusting the first coordinate and the second coordinate of each target contour point according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model;
and determining the first coordinate and the second coordinate of each target contour point after adjustment and the third coordinate of the reference three-dimensional contour point corresponding to each target contour point as the three-dimensional coordinate of one target three-dimensional contour point.
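The lifting step can be sketched in a few lines. The index-wise correspondence between target contour points and reference three-dimensional contour points is an assumption here; the disclosure only states that each target point has a corresponding reference point.

```python
# Hypothetical sketch: lift an adjusted 2-D target contour point into 3-D by
# keeping its adjusted first and second coordinates and borrowing the third
# coordinate (depth) of its corresponding reference 3-D contour point.
# Correspondence by list index is assumed for illustration.

def lift_to_3d(adjusted_2d, ref_3d):
    return [(x, y, ref[2]) for (x, y), ref in zip(adjusted_2d, ref_3d)]
```

Only the depth comes from the reference model; the in-plane shape comes entirely from the target eyebrow map.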
In some embodiments, the adjusting the first coordinate and the second coordinate of each of the target contour points according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model includes:
determining a first distance and a second distance corresponding to the reference eyebrow map, wherein the first distance is a difference value of first coordinates of two end points of the plurality of reference contour points, and the second distance is a difference value of second coordinates of the two end points of the plurality of reference contour points;
determining a third distance and a fourth distance corresponding to the reference three-dimensional eyebrow model, wherein the third distance is a difference value of first coordinates of two end points of the plurality of reference three-dimensional contour points, and the fourth distance is a difference value of second coordinates of the two end points of the plurality of reference three-dimensional contour points;
respectively determining a first proportion and a second proportion, wherein the first proportion is the proportion between the first distance and the third distance, and the second proportion is the proportion between the second distance and the fourth distance;
and adjusting the first coordinate of each target contour point based on the first proportion, and adjusting the second coordinate of each target contour point based on the second proportion.
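A sketch of this mapping-ratio adjustment, under the assumption that the four distances are the bounding-box extents of the reference contour along each axis (the claim's "difference value of first/second coordinates of two end points"):

```python
# Hypothetical sketch: scale a target contour point's (x, y) from the 2-D
# map's coordinate frame into the reference 3-D model's frame, using the
# ratio of the contour's extent in each frame along each axis.

def scale_to_model(target_pts, ref_map_pts, ref_model_pts):
    # First and second distances: extents of the reference contour in the map.
    d1 = max(p[0] for p in ref_map_pts) - min(p[0] for p in ref_map_pts)
    d2 = max(p[1] for p in ref_map_pts) - min(p[1] for p in ref_map_pts)
    # Third and fourth distances: extents in the reference 3-D model (x, y).
    d3 = max(p[0] for p in ref_model_pts) - min(p[0] for p in ref_model_pts)
    d4 = max(p[1] for p in ref_model_pts) - min(p[1] for p in ref_model_pts)
    r1, r2 = d1 / d3, d2 / d4   # first and second proportions
    # Dividing by the map-to-model ratio lands the point in model units.
    return [(x / r1, y / r2) for (x, y) in target_pts]
```

Scaling each axis independently preserves the eyebrow's aspect ratio as drawn in the map while matching the model's coordinate scale.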
In some embodiments, the eyebrow contour information includes two-dimensional coordinates of the target contour points in a plurality of eyebrow segments, and the deforming the reference three-dimensional eyebrow model based on the three-dimensional coordinates of each of the target three-dimensional contour points to obtain the target three-dimensional eyebrow model includes:
for any of the eyebrow segments:
determining a second interpolation function corresponding to the eyebrow segment based on the three-dimensional coordinates of each target three-dimensional contour point in the eyebrow segment;
determining a three-dimensional coordinate corresponding to at least one second interpolation point by adopting the second interpolation function;
and deforming the reference three-dimensional eyebrow model based on the end points in each eyebrow segment and the three-dimensional coordinates corresponding to the second interpolation point to obtain the target three-dimensional eyebrow model.
In some embodiments, the second interpolation function is configured to represent a correspondence between a serial number of a second interpolation point in the eyebrow segment and three-dimensional coordinates of the second interpolation point, and the determining, using the second interpolation function, the three-dimensional coordinates corresponding to at least one second interpolation point includes:
determining a second interpolation sequence corresponding to the eyebrow segment, wherein the second interpolation sequence comprises sequence numbers of a plurality of second interpolation points;
and determining the three-dimensional coordinates corresponding to each second interpolation point by adopting the second interpolation function.
In some embodiments, the target three-dimensional eyebrow model includes three-dimensional coordinates of a plurality of target three-dimensional contour points, the three-dimensional coordinates including a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension;
the displaying of the virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model includes:
respectively adjusting a third coordinate of each target three-dimensional contour point according to a mapping ratio between a reference three-dimensional eyebrow model and the target three-dimensional eyebrow model, wherein the reference three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of reference three-dimensional contour points;
and displaying a virtual image corresponding to the target face image based on the adjusted target three-dimensional eyebrow model and the adjusted target three-dimensional face model.
In some embodiments, the adjusting the third coordinate of each of the target three-dimensional contour points according to the mapping ratio between the reference three-dimensional eyebrow model and the target three-dimensional eyebrow model includes:
respectively determining a fifth distance corresponding to the target three-dimensional eyebrow model and a sixth distance corresponding to the reference three-dimensional eyebrow model, wherein the fifth distance is a difference value of third coordinates of two end points of the plurality of target three-dimensional contour points, and the sixth distance is a difference value of third coordinates of two end points of the plurality of reference three-dimensional contour points;
determining a third ratio, the third ratio being a ratio between the fifth distance and the sixth distance;
and adjusting the third coordinate of each target three-dimensional contour point based on the third proportion.
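Read literally, the third ratio rescales the depth of the target model's points by the ratio of the two models' depth extents. A hedged sketch of that reading (the exact distance definition in the claim is ambiguous, so bounding extents along the third dimension are assumed):

```python
# Hypothetical sketch: rescale the third coordinate (depth) of each target
# 3-D contour point by the third ratio, i.e. the ratio of the target model's
# depth extent (fifth distance) to the reference model's (sixth distance).

def adjust_depth(target_pts_3d, ref_pts_3d):
    d5 = max(p[2] for p in target_pts_3d) - min(p[2] for p in target_pts_3d)
    d6 = max(p[2] for p in ref_pts_3d) - min(p[2] for p in ref_pts_3d)
    r3 = d5 / d6  # third ratio
    return [(x, y, z * r3) for (x, y, z) in target_pts_3d]
```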
In some embodiments, obtaining the target eyebrow map comprises:
acquiring a target face image;
generating the target face map based on the target face image;
and extracting the target eyebrow map from the target face map.
According to a second aspect of the embodiments of the present disclosure, there is provided an avatar display apparatus, the apparatus including:
an acquisition unit configured to acquire a target eyebrow map and a target three-dimensional face model corresponding to a target face image, wherein the target three-dimensional face model does not include eyebrows;
an extracting unit configured to extract eyebrow contour information from the target eyebrow map, the eyebrow contour information characterizing contours of eyebrows in the target eyebrow map;
a generating unit configured to generate a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information;
and the display unit is configured to display the virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, the extraction unit includes:
the identification subunit is configured to perform contour point identification on the target eyebrow map to obtain two-dimensional coordinates of a plurality of target contour points;
a determining subunit configured to perform determining the eyebrow contour information based on the two-dimensional coordinates of the plurality of target contour points, the eyebrow contour information including two-dimensional coordinates of at least three target contour points.
In some embodiments, the determining subunit is configured to perform:
determining a first brow head point, a second brow head point and a brow tail point from the plurality of target contour points;
dividing the contour of the target eyebrow map into three eyebrow segments by taking the first brow head point, the second brow head point and the brow tail point as the end points of the eyebrow segments, wherein each eyebrow segment comprises two end points and at least one target contour point between the two end points;
respectively determining segment contour information corresponding to each eyebrow segment based on the two-dimensional coordinates of the target contour points in each eyebrow segment, wherein the first segment contour information corresponding to the first eyebrow segment comprises the two-dimensional coordinates of the first brow head point, the second brow head point and at least one target contour point between them; the second segment contour information corresponding to the second eyebrow segment comprises the two-dimensional coordinates of the brow tail point, the first brow head point and at least one target contour point between them; and the third segment contour information corresponding to the third eyebrow segment comprises the two-dimensional coordinates of the brow tail point, the second brow head point and at least one target contour point between them.
In some embodiments, the determining subunit is configured to perform:
for any of the eyebrow segments:
determining a first interpolation function corresponding to the eyebrow segment based on the two-dimensional coordinates of each target contour point in the eyebrow segment;
determining two-dimensional coordinates corresponding to at least one first interpolation point by adopting the first interpolation function;
and determining the segment contour information corresponding to the eyebrow segment based on the two-dimensional coordinates of the end points in the eyebrow segment and the two-dimensional coordinates corresponding to the at least one first interpolation point.
In some embodiments, the first interpolation function is used to represent a correspondence between a sequence number of a first interpolation point in the eyebrow segment and two-dimensional coordinates of the first interpolation point, and the determining subunit is configured to perform:
determining a first interpolation sequence corresponding to the eyebrow segment, wherein the first interpolation sequence comprises sequence numbers of a plurality of first interpolation points;
and determining the two-dimensional coordinates corresponding to each first interpolation point by adopting the first interpolation function.
In some embodiments, the determining subunit is configured to perform:
determining a maximum distance corresponding to reference segment contour information of the eyebrow segment, wherein the maximum distance is the distance between the two end points in the reference segment contour information, and the reference segment contour information is the contour information of the eyebrow segment in a reference three-dimensional eyebrow model corresponding to a reference eyebrow map;
determining an accumulated distance corresponding to each reference three-dimensional contour point in the reference segment contour information, wherein the accumulated distance is the distance between the reference three-dimensional contour point and a target end point in the reference segment contour information;
and determining a first interpolation sequence corresponding to the eyebrow segment based on the ratio of the accumulated distance corresponding to each reference three-dimensional contour point to the maximum distance.
In some embodiments, the generating unit includes:
and the deformation subunit is configured to perform deformation on a reference three-dimensional eyebrow model corresponding to the reference eyebrow map based on the eyebrow contour information to obtain the target three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model is matched with the eyebrow contour information.
In some embodiments, the eyebrow contour information includes two-dimensional coordinates of at least three target contour points, and the warping subunit is configured to perform:
determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map and the reference three-dimensional eyebrow model;
and deforming the reference three-dimensional eyebrow model based on the three-dimensional coordinates of each target three-dimensional contour point to obtain the target three-dimensional eyebrow model.
In some embodiments, the eyebrow contour information includes first segment contour information corresponding to a first eyebrow segment, second segment contour information corresponding to a second eyebrow segment, and third segment contour information corresponding to a third eyebrow segment, each segment contour information includes two-dimensional coordinates of at least three target contour points, the two-dimensional coordinates include a first coordinate belonging to a first dimension and a second coordinate belonging to a second dimension;
the reference eyebrow map comprises two-dimensional coordinates of a plurality of reference contour points, the reference three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of reference three-dimensional contour points, the three-dimensional coordinates comprising a first coordinate belonging to the first dimension, a second coordinate belonging to the second dimension, and a third coordinate belonging to the third dimension;
the morphing subunit configured to perform:
respectively adjusting the first coordinate and the second coordinate of each target contour point according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model;
and determining the first coordinate and the second coordinate of each target contour point after adjustment and the third coordinate of the reference three-dimensional contour point corresponding to each target contour point as the three-dimensional coordinate of one target three-dimensional contour point.
In some embodiments, the morphing subunit is configured to perform:
determining a first distance and a second distance corresponding to the reference eyebrow map, wherein the first distance is a difference value of first coordinates of two end points of the plurality of reference contour points, and the second distance is a difference value of second coordinates of the two end points of the plurality of reference contour points;
determining a third distance and a fourth distance corresponding to the reference three-dimensional eyebrow model, wherein the third distance is a difference value of first coordinates of two end points of the plurality of reference three-dimensional contour points, and the fourth distance is a difference value of second coordinates of the two end points of the plurality of reference three-dimensional contour points;
respectively determining a first proportion and a second proportion, wherein the first proportion is the proportion between the first distance and the third distance, and the second proportion is the proportion between the second distance and the fourth distance;
and adjusting the first coordinate of each target contour point based on the first proportion, and adjusting the second coordinate of each target contour point based on the second proportion.
In some embodiments, the eyebrow contour information includes two-dimensional coordinates of the target contour point in a plurality of eyebrow segments, and the deformation subunit is configured to perform:
for any of the eyebrow segments:
determining a second interpolation function corresponding to the eyebrow segment based on the three-dimensional coordinates of each target three-dimensional contour point in the eyebrow segment;
determining a three-dimensional coordinate corresponding to at least one second interpolation point by adopting the second interpolation function;
and deforming the reference three-dimensional eyebrow model based on the end points in each eyebrow segment and the three-dimensional coordinates corresponding to the second interpolation point to obtain the target three-dimensional eyebrow model.
In some embodiments, the second interpolation function is used to represent a correspondence between a sequence number of a second interpolation point in the eyebrow segment and three-dimensional coordinates of the second interpolation point, and the deformation subunit is configured to perform:
determining a second interpolation sequence corresponding to the eyebrow segment, wherein the second interpolation sequence comprises sequence numbers of a plurality of second interpolation points;
and determining the three-dimensional coordinates corresponding to each second interpolation point by adopting the second interpolation function.
In some embodiments, the target three-dimensional eyebrow model includes three-dimensional coordinates of a plurality of target three-dimensional contour points, the three-dimensional coordinates including a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension;
the display unit includes:
an adjusting subunit configured to adjust the third coordinate of each target three-dimensional contour point according to a mapping ratio between a reference three-dimensional eyebrow model and the target three-dimensional eyebrow model, the reference three-dimensional eyebrow model comprising three-dimensional coordinates of a plurality of reference three-dimensional contour points;
and a display subunit configured to display the virtual image corresponding to the target face image based on the adjusted target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, the adjusting subunit is configured to perform:
respectively determining a fifth distance corresponding to the target three-dimensional eyebrow model and a sixth distance corresponding to the reference three-dimensional eyebrow model, wherein the fifth distance is a difference value of third coordinates of two end points of the plurality of target three-dimensional contour points, and the sixth distance is a difference value of third coordinates of two end points of the plurality of reference three-dimensional contour points;
determining a third ratio, the third ratio being a ratio between the fifth distance and the sixth distance;
and adjusting the third coordinate of each target three-dimensional contour point based on the third proportion.
In some embodiments, the obtaining unit is configured to perform:
acquiring a target face image;
generating the target face map based on the target face image;
and extracting the target eyebrow map from the target face map.
According to a third aspect of the embodiments of the present disclosure, there is provided a terminal, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the avatar display method as described in the above aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium in which instructions, when executed by a processor of a terminal, enable the terminal to perform the avatar display method as described in the above aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the avatar display method as described in the above aspect.
In the embodiment of the disclosure, eyebrow contour information is extracted from the two-dimensional target eyebrow map. Because the eyebrow contour information can represent the outline of the eyebrows, a target three-dimensional eyebrow model corresponding to the target eyebrow map can be generated from the extracted eyebrow contour information, realizing the conversion of the eyebrows from a two-dimensional map to a three-dimensional model. The avatar corresponding to the target human face image can then be displayed based on the target three-dimensional eyebrow model and the target three-dimensional human face model, so that the lines of the eyebrows in the displayed avatar are smooth and the display effect of the avatar is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating an avatar display method in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating another avatar display method in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a target eyebrow map in accordance with an exemplary embodiment;
FIG. 4 is a diagram illustrating an eyebrow overlay and a three-dimensional eyebrow model, in accordance with an exemplary embodiment;
FIG. 5 is a diagram illustrating a target face model and a target three-dimensional eyebrow model, according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating an avatar display flow according to an exemplary embodiment;
fig. 7 is a block diagram illustrating a structure of an avatar display apparatus according to an exemplary embodiment;
fig. 8 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The user information to which the present disclosure relates may be information authorized by the user or sufficiently authorized by each party.
The disclosed embodiment provides an avatar display method, which is executed by a terminal. In some embodiments, the terminal is a laptop, a cell phone, a tablet, or other terminal.
The virtual image display method provided by the disclosure can be applied to scenes displayed by virtual images. An application scenario of the embodiment of the present disclosure is explained below.
For example, when a user sets a personal avatar in a terminal, the terminal collects a face image of the user, and the avatar display method provided by the embodiment of the disclosure displays the avatar corresponding to the face image, producing a stylized avatar that matches the user's intention with a good display effect.
The avatar display method provided by the embodiment of the present disclosure can also be applied to other scenes, and the embodiment of the present disclosure does not limit this.
Fig. 1 is a flowchart illustrating an avatar display method according to an exemplary embodiment. As shown in fig. 1, the method is performed by a terminal and includes the following steps.
In step 101, the terminal obtains a target eyebrow map and a target three-dimensional face model corresponding to the target face image, where the target three-dimensional face model does not include eyebrows.
The target face image is a face image of a user, where the user is the user of the terminal. In some embodiments, the target face image is acquired by the terminal. The target eyebrow map is a two-dimensional map containing eyebrows, extracted from a target face map; the target face map is a two-dimensional map containing a face and includes maps at the positions of at least one of the eyebrows, eyes, nose, mouth, ears, and the like. The target three-dimensional face model represents the three-dimensional outline of the face and does not contain facial features such as the eyebrows.
In the related art, a terminal acquires a two-dimensional face map and a three-dimensional face model corresponding to a face image, pastes the two-dimensional face map onto the three-dimensional face model, and displays an avatar based on the two-dimensional face map and the three-dimensional face model.
In the embodiment of the disclosure, the terminal separately obtains the target eyebrow map corresponding to the target face image, processes the target eyebrow map, generates the target three-dimensional eyebrow model corresponding to the target eyebrow map, realizes the construction of the three-dimensional model for the eyebrows separately, and displays the virtual image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, when the user triggers the terminal to display the avatar, the terminal collects a face image of the user to obtain the target face image, and then obtains the target eyebrow map and the target three-dimensional face model corresponding to the target face image, so as to display the avatar.
In step 102, the terminal extracts eyebrow contour information from the target eyebrow map, wherein the eyebrow contour information represents the contours of the eyebrows in the target eyebrow map.
The eyebrows are represented in the target eyebrow map in the form of eyebrow-shaped color blocks. The eyebrow contour information represents the contour of the eyebrows in the target eyebrow map, that is, the shape of the eyebrows and the position of the eyebrows.
In step 103, the terminal generates a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information.
The target three-dimensional eyebrow model is a three-dimensional model of eyebrows, and can represent the outline of the eyebrows in a three-dimensional space.
In the embodiment of the disclosure, eyebrow contour information is extracted from the two-dimensional target eyebrow map. Because the eyebrow contour information can represent the eyebrow contour, the target three-dimensional eyebrow model corresponding to the target eyebrow map is generated from the extracted eyebrow contour information, realizing the conversion of the eyebrows from a two-dimensional map to a three-dimensional model.
In step 104, the terminal displays a virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
Wherein the displayed virtual image is a stylized image of the target face image.
In the embodiment of the disclosure, eyebrow contour information is extracted from the two-dimensional target eyebrow map. Because the eyebrow contour information can represent the eyebrow contour, the target three-dimensional eyebrow model corresponding to the target eyebrow map is generated from the extracted eyebrow contour information, realizing the conversion of the eyebrows from a two-dimensional map to a three-dimensional model. The avatar corresponding to the target face image can then be displayed based on the target three-dimensional eyebrow model and the target three-dimensional face model, so that the lines of the eyebrows in the displayed avatar are smooth and the display effect of the avatar is improved.
Fig. 2 is a flowchart illustrating an avatar display method according to an exemplary embodiment. As shown in fig. 2, the method is performed by a terminal and includes the following steps.
In step 201, the terminal obtains a target eyebrow map and a target three-dimensional face model corresponding to the target face image, where the target three-dimensional face model does not include eyebrows.
The target face image is a face image of a user, where the user is the user of the terminal. The target eyebrow map is a two-dimensional map containing eyebrows. The target three-dimensional face model represents the three-dimensional outline of the face and does not contain facial features such as the eyebrows.
In some embodiments, the target eyebrow map is extracted from the target face map, and accordingly, an implementation manner of obtaining the target eyebrow map corresponding to the target face image includes: acquiring a target face image; generating a target face map based on the target face image; and extracting a target eyebrow map from the target face map.
The target face map is a two-dimensional map containing a face and includes maps at the positions of at least one of the eyebrows, eyes, nose, mouth, ears, and the like. In a possible implementation of this embodiment, when the user triggers the terminal to display the avatar, the terminal collects a face image of the user to obtain the target face image and then generates the target face map. In this implementation, after the target face image is acquired, the terminal further generates the target three-dimensional face model based on the target face image.
The position of the eyebrows in the target face map is fixed, and accordingly the terminal stores this position. After the target face map is acquired, the target eyebrow map at this position is extracted from the target face map. The position may be represented in the form of a coordinate area. Because the target face map includes two eyebrows, two target eyebrow maps are extracted. Since the two eyebrows are symmetrical, the embodiment of the present disclosure takes one target eyebrow map as an example for description; the other target eyebrow map is processed similarly and is not described again.
It should be noted that, after the target eyebrow map is extracted from the target face map, the positions of the eyebrows in the target face map may be filled with skin to ensure the integrity of the target face map.
In the embodiment of the present disclosure, considering that the eyebrows in the target face map need to be handled separately, the target eyebrow map is extracted from the target face map so that subsequent processing can be performed on the target eyebrow map alone. The data amount of the processed target eyebrow map is smaller than that of the whole target face map, and influence on other areas of the target face map is avoided.
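The extraction described above can be sketched as a simple crop at the stored coordinate area, with the vacated region filled with a skin value so the target face map stays complete; the region bounds and the skin grey value below are assumed for illustration.

```python
import numpy as np

# Assumed stored coordinate area of one eyebrow in the target face map
# (row slice, column slice); in practice the terminal stores this position.
EYEBROW_REGION = (slice(40, 60), slice(30, 90))
SKIN_VALUE = 200  # assumed skin grey value used to fill the vacated region

def extract_eyebrow_map(face_map: np.ndarray) -> np.ndarray:
    """Cut the target eyebrow map out of the target face map, then fill the
    eyebrow position with skin so the target face map stays complete."""
    eyebrow_map = face_map[EYEBROW_REGION].copy()
    face_map[EYEBROW_REGION] = SKIN_VALUE
    return eyebrow_map
```
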
In step 202, the terminal identifies contour points of the target eyebrow map to obtain two-dimensional coordinates of a plurality of target contour points.
The target eyebrow map contains the eyebrows and the skin around them, and the contour of the eyebrows is the shape enclosed by the eyebrow boundary. The terminal performs contour point identification on the target eyebrow map to identify the contour points on the eyebrow boundary, namely the target contour points, and thereby obtains the two-dimensional coordinates of the target contour points. The two-dimensional coordinates include a first coordinate belonging to a first dimension and a second coordinate belonging to a second dimension. In some embodiments, the terminal calls a contour point identification function to perform contour point identification on the target eyebrow map; the contour point identification function may be set as desired, such as the findContours function or another function.
If the color of the eyebrows is close to that of the surrounding skin, the contour of the eyebrows is difficult to distinguish and contour point identification on the target eyebrow map is difficult. In some embodiments, before performing contour point identification, the terminal performs color adjustment on the target eyebrow map and then identifies contour points on the color-adjusted target eyebrow map. After color adjustment, the color of the eyebrows differs from the color of the skin around them, which facilitates contour point identification on the target eyebrow map and improves its accuracy.
In a possible implementation of this embodiment, the color adjustment of the target eyebrow map includes: setting the gray value of the pixel points of the eyebrow area in the target eyebrow map to a first gray value, and setting the gray value of the pixel points of the skin area to a second gray value. The first gray value and the second gray value differ and can be set as required; for example, the first gray value corresponds to white and the second gray value corresponds to black. Setting the pixel points of the eyebrow area and the skin area to different gray values distinguishes the two areas, so that the eyebrow area can be extracted.
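The color adjustment above might be sketched as follows; the threshold test used to decide which pixels belong to the eyebrow area is an assumption, since the disclosure only requires the two areas to receive different gray values.

```python
import numpy as np

def binarize_eyebrow_map(eyebrow_map: np.ndarray, threshold: int = 100) -> np.ndarray:
    """Color adjustment sketch: pixels darker than `threshold` are treated as
    eyebrow and set to the first gray value (255, white); the rest are treated
    as skin and set to the second gray value (0, black)."""
    return np.where(eyebrow_map < threshold, 255, 0).astype(np.uint8)
```
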
For example, referring to fig. 3, after the target eyebrow map is extracted, the eyebrow area is extracted from the target eyebrow map, and contour points are then identified. In fig. 3, the black dots on the eyebrows in the 3rd target eyebrow map from left to right in the top row represent the identified target contour points.
After contour point identification is performed on the target eyebrow map to obtain the two-dimensional coordinates of a plurality of target contour points, the eyebrow contour information can be determined based on these two-dimensional coordinates. The eyebrow contour information represents the eyebrow contour in the target eyebrow map and includes the two-dimensional coordinates of at least three target contour points, so that the position of the eyebrow contour is determined by at least three target contour points. The specific process of determining the eyebrow contour information based on the two-dimensional coordinates of the plurality of target contour points is described in the following steps 203-205. Of course, steps 203-205 are merely exemplary, and the embodiments of the present disclosure may also determine the eyebrow contour information in other ways.
In the embodiment of the disclosure, contour point identification is performed on the target eyebrow map to obtain the two-dimensional coordinates of a plurality of target contour points, so that more contour boundary information is obtained. The eyebrow contour information is then determined based on these two-dimensional coordinates, so the determined eyebrow contour information is richer and can accurately represent the eyebrows.
In step 203, the terminal determines a first brow point, a second brow point and a brow tail point from the plurality of target contour points.
The plurality of target contour points include the inflection points of the eyebrows and the two end points of line segments. The target contour points therefore contain detailed contour boundary information but no semantic information related to the eyebrows. Accordingly, the terminal determines the contour points of the brow head and the brow tail from the plurality of target contour points: the brow head corresponds to an upper contour point and a lower contour point, namely the first brow head point and the second brow head point, while the brow tail corresponds to one contour point, namely the brow tail point.
The two-dimensional coordinates of the first brow head point, the second brow head point, and the brow tail point can represent semantic information related to the eyebrows, such as the position of the brow tail and the position of the brow head, so these three points can be regarded as the three key contour points of the eyebrows. Taking the eyebrow in the target eyebrow map as the left eyebrow of the face as an example, the brow tail point is the leftmost contour point among the plurality of target contour points, the first brow head point is the contour point at the upper right of the eyebrow, and the second brow head point is the contour point at the lower right of the eyebrow.
Referring to fig. 3, in the fourth target eyebrow map from left to right in the top row, the leftmost contour point among the plurality of target contour points, that is, the first contour point intersecting the vertical line on the left, is the brow tail point. The first brow head point is located at the upper right of the eyebrow, namely the first contour point intersecting a first oblique line, where the first oblique line is obtained by rotating the vertical line by 45° counterclockwise. The second brow head point is located at the lower right of the eyebrow, namely the first contour point intersecting a second oblique line, where the second oblique line is obtained by rotating the vertical line by 45° clockwise. In fig. 3, the points shown in the fifth target eyebrow map from left to right in the top row are the 3 extracted key contour points.
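Under an image-coordinate convention (x to the right, y downward) and for a left eyebrow, the sweeps described above reduce to taking directional extremes; this is a sketch, and the use of x − y and x + y as the two 45° sweep directions is an assumption.

```python
def key_contour_points(points):
    """points: list of (x, y) target contour points of a left eyebrow.
    Returns (brow_tail, first_brow_head, second_brow_head):
    - brow tail: leftmost point (smallest x);
    - first brow head point: upper-right extreme, the first point met by the
      45° counter-clockwise oblique line (largest x - y, y pointing down);
    - second brow head point: lower-right extreme (largest x + y)."""
    brow_tail = min(points, key=lambda p: p[0])
    first_brow_head = max(points, key=lambda p: p[0] - p[1])
    second_brow_head = max(points, key=lambda p: p[0] + p[1])
    return brow_tail, first_brow_head, second_brow_head
```
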
In step 204, the terminal divides the contour of the target eyebrow map into three eyebrow segments, using the first brow head point, the second brow head point, and the brow tail point as the end points of the eyebrow segments. Each eyebrow segment includes two end points and at least one target contour point between them.
After the first brow head point, the second brow head point, and the brow tail point are obtained, the eyebrow can be divided into three eyebrow segments: one from the brow tail point to the first brow head point, one from the first brow head point to the second brow head point, and one from the second brow head point to the brow tail point.
In some embodiments, a first contour point set, a second contour point set, and a third contour point set are first determined based on the two-dimensional coordinates of the identified plurality of target contour points. The first contour point set includes the two-dimensional coordinates of the first brow head point, the second brow head point, and each target contour point between them; the second contour point set includes the two-dimensional coordinates of the brow tail point, the first brow head point, and each target contour point between them; the third contour point set includes the two-dimensional coordinates of the brow tail point, the second brow head point, and each target contour point between them. Each contour point set corresponds to one eyebrow segment.
This step divides the plurality of target contour points into three contour point sets, and the contour point sets of two adjacent eyebrow segments share the target contour point on their segment boundary.
Referring to fig. 3, the eyebrow can be divided into three eyebrow segments: the upper edge of the eyebrow, the brow head, and the lower edge of the eyebrow. Taking the eyebrow segment from the brow tail point to the first brow head point as an example, the coordinates of the target contour points in the second contour point set, arranged in order from the brow tail point to the first brow head point, are: c_0, c_1, …, c_n.
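Given the indices of the three key points in an ordered closed contour, the split into the three eyebrow segments (adjacent segments sharing their boundary point) might look like this; the assumed ordering brow tail → first brow head → second brow head along the contour is illustrative.

```python
def split_contour(points, i_tail, i_head1, i_head2):
    """points: closed eyebrow contour as an ordered list of points, assumed
    ordered so that i_tail < i_head1 < i_head2. Returns the three eyebrow
    segments; each segment keeps both of its end points, so two adjacent
    segments share the key contour point on their boundary."""
    upper = points[i_tail:i_head1 + 1]              # brow tail -> first brow head
    head = points[i_head1:i_head2 + 1]              # first -> second brow head
    lower = points[i_head2:] + points[:i_tail + 1]  # second brow head -> brow tail
    return upper, head, lower
```
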
In step 205, the terminal determines segment contour information corresponding to each eyebrow segment based on the two-dimensional coordinates of the target contour point in each eyebrow segment.
The eyebrow contour information includes first segment contour information, second segment contour information, and third segment contour information, each corresponding to one eyebrow segment. The first segment contour information, corresponding to the first eyebrow segment, includes the two-dimensional coordinates of the first brow head point, the second brow head point, and at least one target contour point between them; the second segment contour information, corresponding to the second eyebrow segment, includes the two-dimensional coordinates of the brow tail point, the first brow head point, and at least one target contour point between them; the third segment contour information, corresponding to the third eyebrow segment, includes the two-dimensional coordinates of the brow tail point, the second brow head point, and at least one target contour point between them.
In the embodiment of the disclosure, contour point identification is performed on the target eyebrow map to obtain the two-dimensional coordinates of a plurality of target contour points, so that more contour boundary information is obtained. The first brow head point, the second brow head point, and the brow tail point, which carry semantic information, are then extracted from this contour boundary information, so that the eyebrows can be divided into three eyebrow segments. The determined eyebrow contour information includes the segment contour information of each eyebrow segment, and is therefore rich and able to represent the eyebrows accurately.
Since the extracted positions of the plurality of target contour points on the eyebrows may not be uniform, directly using them for subsequent processing of the target eyebrow map may affect the accuracy of the processing. The eyebrow contour information may therefore be further determined based on the two-dimensional coordinates of the plurality of target contour points. In some embodiments, the implementation of step 205 includes the following steps 2051-2053.
2051. For any one of the eyebrow segments, a first interpolation function corresponding to the eyebrow segment is determined based on the two-dimensional coordinates of each target contour point in the eyebrow segment.
The eyebrow segment includes at least three target contour points, and interpolation can be performed based on the two-dimensional coordinates of each target contour point; the resulting first interpolation function can represent the distribution of the target contour points in the eyebrow segment. The domain of the first interpolation function is a preset domain, and each target contour point corresponds to a value in this domain. The preset domain can be set as required, for example [0, 1]. Then, keeping the two end points of the eyebrow segment unchanged, uniformly distributed first interpolation points are selected from the eyebrow segment to serve as the target contour points of the eyebrow segment, and the other target contour points are no longer considered.
In some embodiments, the first interpolation function is used to represent a correspondence between a sequence number of a first interpolation point in the eyebrow segment and two-dimensional coordinates of the first interpolation point. The first interpolation points are extracted from the eyebrow contour by adopting a first interpolation function, and the sequence number of each first interpolation point refers to the sequence of each first interpolation point in all the first interpolation points.
For example, suppose that in the third target eyebrow map from left to right in the top row of fig. 3, the target contour points, arranged in order from the brow tail to the brow head, are x_0, x_1, …, x_n. Then, by interpolation, an interpolation function f with domain [0, 1] can be obtained that satisfies the following relationship:

f(i/n) = x_i, i = 0, 1, …, n
Here n represents the interpolation density, that is, the number of interpolation points to be selected; n may be any value, so uniform contour points can be obtained by interpolation at any density.
2052. And determining the two-dimensional coordinates corresponding to the at least one first interpolation point by adopting the first interpolation function.
After the first interpolation function is determined, uniform first interpolation points need to be selected from the outline of the eyebrow segment by adopting the first interpolation function, and therefore the two-dimensional coordinates corresponding to at least one first interpolation point are obtained.
The sequence number of the first interpolation point is an independent variable of the first interpolation function, the terminal calls the first interpolation function, calculation is carried out based on the independent variable and the first interpolation function, and a dependent variable of the first interpolation function, namely a two-dimensional coordinate corresponding to the first interpolation point, is obtained.
For example, the first interpolation function is f(i/n), where i is a positive integer. If 5 first interpolation points are to be extracted from the eyebrow segment, then n = 5, and the two-dimensional coordinates corresponding to the extracted 5 first interpolation points are:

f(1/5), f(2/5), f(3/5), f(4/5), f(5/5)
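Steps 2051 and 2052 can be sketched together: build f from the segment's target contour points, then evaluate it at uniformly spaced values of the preset domain [0, 1]. Linear interpolation between the nodes is an assumption; the disclosure does not fix the interpolation kind.

```python
def make_first_interpolation_function(points):
    """points: ordered 2-D target contour points of one eyebrow segment.
    Returns f on [0, 1] with f(i/n) = points[i]; values between the nodes
    are linearly interpolated (an assumed interpolation kind)."""
    n = len(points) - 1

    def f(t):
        if t >= 1.0:
            return points[n]
        s = t * n
        i = int(s)          # index of the node to the left of t
        w = s - i           # fractional position between node i and i + 1
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))

    return f

def resample_segment(points, m):
    """Select m + 1 uniformly spaced first interpolation points f(i/m)."""
    f = make_first_interpolation_function(points)
    return [f(i / m) for i in range(m + 1)]
```
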
In some embodiments, a first interpolation sequence corresponding to the eyebrow segment is determined, the first interpolation sequence including the sequence numbers of a plurality of first interpolation points. The two-dimensional coordinate corresponding to each first interpolation point is determined by the first interpolation function; since the first interpolation points are the target contour points to be selected from the eyebrow segment, these two-dimensional coordinates represent the target contour points, and the target contour points previously obtained by contour identification are no longer considered.
In some embodiments, an implementation of determining a first interpolation sequence corresponding to an eyebrow segment includes: determining the maximum distance corresponding to the profile information of a reference segment of the eyebrow segment, wherein the maximum distance is the distance between two end points in the profile information of the reference segment, and the profile information of the reference segment is the profile information of the eyebrow segment in a reference three-dimensional eyebrow model corresponding to a reference eyebrow map; determining the accumulated distance corresponding to each reference three-dimensional contour point in the reference fragment contour information, wherein the accumulated distance is the distance between the reference three-dimensional contour point and a target end point in the reference fragment contour information; and determining a first interpolation sequence corresponding to the eyebrow segment based on the ratio of the accumulated distance corresponding to each reference three-dimensional contour point to the maximum distance.
The reference eyebrow map and the reference three-dimensional eyebrow model are used for providing reference for determining the target three-dimensional eyebrow model, and the reference eyebrow map and the reference three-dimensional eyebrow model are stored in the terminal.
The reference three-dimensional eyebrow model includes first reference segment contour information, second reference segment contour information, and third reference segment contour information; each reference segment contour information corresponds to one eyebrow segment and includes the three-dimensional coordinates of at least three reference three-dimensional contour points, where a three-dimensional coordinate includes a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension. The first reference segment contour information includes the three-dimensional coordinates of the first brow head point, the second brow head point, and each reference three-dimensional contour point between them; the second reference segment contour information includes the three-dimensional coordinates of the brow tail point, the first brow head point, and each reference three-dimensional contour point between them; the third reference segment contour information includes the three-dimensional coordinates of the brow tail point, the second brow head point, and each reference three-dimensional contour point between them.
In some embodiments, the distance between two end points is calculated from their three-dimensional coordinates. For example, taking the eyebrow segment as the upper edge of the eyebrow, suppose the three-dimensional coordinates of the 5 reference three-dimensional contour points included in the second reference segment contour information are c_0, c_1, …, c_p, where p = 4. The cumulative distance corresponding to each reference three-dimensional contour point is shown in the following formula:

d_0 = 0; d_1 = d_0 + |c_1 − c_0|; …; d_p = d_{p−1} + |c_p − c_{p−1}|
Accordingly, the first interpolation sequence includes a plurality of first interpolation points, as shown in the following formula:

d_0/d_p, d_1/d_p, …, d_p/d_p
Here d_p is the maximum distance corresponding to the reference segment contour information of the eyebrow segment.
In the embodiment of the disclosure, since the three-dimensional coordinates of the plurality of reference three-dimensional contour points in the reference three-dimensional eyebrow model are known and the distances between the plurality of reference three-dimensional contour points are set uniformly, the first interpolation sequence is determined based on the maximum distances corresponding to the plurality of reference three-dimensional contour points and the accumulated distance corresponding to each reference three-dimensional contour point, so that the distances between the plurality of interpolation points in the determined first interpolation sequence are uniform, and the uniformity of the first interpolation sequence is improved.
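The cumulative-distance computation and the resulting sequence of ratios described above can be sketched as:

```python
import math

def first_interpolation_sequence(ref_points):
    """ref_points: ordered 3-D coordinates c_0 ... c_p of the reference
    three-dimensional contour points of one eyebrow segment. Computes the
    cumulative distances d_0 = 0, d_k = d_{k-1} + |c_k - c_{k-1}| and
    returns the sequence d_k / d_p, the ratio of each cumulative distance
    to the maximum distance."""
    d = [0.0]
    for a, b in zip(ref_points, ref_points[1:]):
        d.append(d[-1] + math.dist(a, b))
    d_p = d[-1]  # maximum distance of the reference segment
    return [dk / d_p for dk in d]
```
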
2053. Determining the segment contour information corresponding to the eyebrow segment based on the two-dimensional coordinates of the end points in the eyebrow segment and the two-dimensional coordinates corresponding to the at least one first interpolation point.
The eyebrow segment comprises two end points and at least one first interpolation point located between the two end points; these points are the at least three target contour points of the eyebrow segment, and the two-dimensional coordinates of the at least three target contour points corresponding to the eyebrow segment form the segment contour information corresponding to the eyebrow segment.
In the embodiment of the present disclosure, the first interpolation function is determined based on the two-dimensional coordinates of at least three target contour points of the eyebrow segment, so it can represent the distribution of the target contour points on the contour of the eyebrow segment. The two-dimensional coordinates determined from the first interpolation function and the first interpolation sequence are also located on the eyebrow segment, so the first interpolation points are likewise contour points on the eyebrow segment; the positions of the plurality of first interpolation points determined by interpolation are relatively uniform, and the determined segment contour information can therefore represent the contour of the eyebrow segment relatively accurately.
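Steps 2052-2053 can be sketched as below. The disclosure does not pin the first interpolation function to a particular scheme, so piecewise-linear interpolation stands in for it here; all names are illustrative.

```python
import numpy as np

def resample_segment(points_2d: np.ndarray, t_seq: np.ndarray) -> np.ndarray:
    """Fit a 'first interpolation function' through the target contour
    points of one eyebrow segment and evaluate it at the sequence
    numbers t_seq (values in [0, 1]); the two end points map to
    t = 0 and t = 1 and are therefore preserved."""
    seg = np.linalg.norm(np.diff(points_2d, axis=0), axis=1)
    t_known = np.concatenate([[0.0], np.cumsum(seg)])
    t_known /= t_known[-1]                      # parameterize known points in [0, 1]
    # piecewise-linear interpolation, one call per coordinate dimension
    return np.stack([np.interp(t_seq, t_known, points_2d[:, k])
                     for k in range(points_2d.shape[1])], axis=1)

# hypothetical upper-edge contour points from the target eyebrow map
segment = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 1.4], [3.0, 1.0], [4.0, 0.0]])
uniform = resample_segment(segment, np.linspace(0.0, 1.0, 9))  # 9 contour points
```

Evaluating at a uniform `t_seq` yields the evenly distributed first interpolation points described above, with both end points unchanged.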
In some embodiments, steps 202-205 are one implementation of the terminal extracting eyebrow contour information from the target eyebrow map. The embodiment of the disclosure can also adopt other modes to extract the eyebrow contour information.
It should be noted that the number of target contour points corresponding to each eyebrow segment can be controlled by setting the number of reference three-dimensional contour points corresponding to that eyebrow segment in the reference three-dimensional eyebrow model. For example, referring to fig. 3 and taking the upper edge of the eyebrow as an example, the number of target contour points is 5 in the 1st target eyebrow map (from left to right in the bottom row), 10 in the 2nd target eyebrow map, and 20 in the 3rd target eyebrow map; the eyebrows in the 4th and 5th target eyebrow maps of the bottom row show a smaller and a larger number of target contour points, respectively.
In step 206, the terminal generates a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information.
In the disclosed embodiment, a three-dimensional model of the eyebrow, i.e. the target three-dimensional eyebrow model, is generated separately. In some embodiments, generating the target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information is implemented as follows: based on the eyebrow contour information, the reference three-dimensional eyebrow model corresponding to the reference eyebrow map is deformed to obtain the target three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model matches the eyebrow contour information. For example, referring to fig. 4, the reference eyebrow map is shown on the left side of the upper row, the target eyebrow map on the right side of the upper row, the reference three-dimensional eyebrow model on the left side of the lower row, and the target three-dimensional eyebrow model in the lower row.
In the embodiment of the present disclosure, the reference three-dimensional eyebrow model is used to provide a reference, and the eyebrow contour information is used as an eyebrow contour target to deform the reference three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model obtained by deformation matches with the eyebrow contour information, and the target three-dimensional eyebrow model is the target three-dimensional eyebrow model corresponding to the target eyebrow map. The mode of generating the three-dimensional model can greatly reduce the processing amount and improve the efficiency of generating the three-dimensional model by means of the prior information provided in the reference three-dimensional eyebrow model.
In some embodiments, based on the eyebrow contour information, the reference three-dimensional eyebrow model corresponding to the reference eyebrow map is deformed to obtain the target three-dimensional eyebrow model, and the implementation manner of matching the eyebrow contour of the target three-dimensional eyebrow model with the eyebrow contour information includes the following steps 2061-2062.
2061. And determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map and the reference three-dimensional eyebrow model.
The eyebrow contour information comprises first section contour information corresponding to a first eyebrow section, second section contour information corresponding to a second eyebrow section and third section contour information corresponding to a third eyebrow section, each section contour information comprises two-dimensional coordinates of at least three target contour points, and each two-dimensional coordinate comprises a first coordinate belonging to a first dimension and a second coordinate belonging to a second dimension.
The reference eyebrow map includes two-dimensional coordinates of a plurality of reference contour points, and the reference three-dimensional eyebrow model includes three-dimensional coordinates of a plurality of reference three-dimensional contour points, the three-dimensional coordinates including a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension.
Correspondingly, determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map and the reference three-dimensional eyebrow model, comprising: and respectively adjusting the first coordinate and the second coordinate of each target contour point according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model, and determining the adjusted first coordinate and second coordinate of each target contour point and the third coordinate of the reference three-dimensional contour point corresponding to each target contour point as the three-dimensional coordinate of one target three-dimensional contour point.
The reference eyebrow map is a two-dimensional map, the reference three-dimensional eyebrow model is a three-dimensional model, and a mapping ratio exists between the reference eyebrow map and the reference three-dimensional eyebrow model. In some embodiments, the adjusting the first coordinate and the second coordinate of each target contour point according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model includes: determining a first distance and a second distance corresponding to the reference eyebrow map, wherein the first distance is the difference value of first coordinates of two end points of the plurality of reference contour points, and the second distance is the difference value of second coordinates of the two end points of the plurality of reference contour points; determining a third distance and a fourth distance corresponding to the reference three-dimensional eyebrow model, wherein the third distance is a difference value of first coordinates of two end points of the plurality of reference three-dimensional contour points, and the fourth distance is a difference value of second coordinates of the two end points of the plurality of reference three-dimensional contour points; respectively determining a first proportion and a second proportion, wherein the first proportion is the proportion between the first distance and the third distance, and the second proportion is the proportion between the second distance and the fourth distance; the first coordinate of each target contour point is adjusted based on the first proportion, and the second coordinate of each target contour point is adjusted based on the second proportion.
In addition, denote the coordinate set of the target contour points corresponding to the target eyebrow map as {(x_i, y_i)}, and the coordinate sets of the reference three-dimensional contour points of the reference three-dimensional eyebrow model and of the target three-dimensional contour points as {(x_i^r, y_i^r, z_i^r)} and {(X_i, Y_i, Z_i)}, respectively. The coordinate range formed by the first distance and the second distance of the reference eyebrow map is (W_2d, H_2d), and the coordinate range formed by the third distance and the fourth distance corresponding to the reference three-dimensional eyebrow model is (W_3d, H_3d). The following relationship is thus satisfied:

X_i = x_i · (W_3d / W_2d); Y_i = y_i · (H_3d / H_2d); Z_i = z_i^r.
In the embodiment of the disclosure, the first coordinate and the second coordinate of each target contour point are adjusted based on the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model, which maps them into the three-dimensional space to obtain the first coordinate and the second coordinate in that space. Combined with the third coordinate of the corresponding reference three-dimensional contour point, this yields a three-dimensional target contour point, i.e. a target three-dimensional contour point. The target three-dimensional eyebrow model can then be obtained from the three-dimensional coordinates of each target three-dimensional contour point in each eyebrow segment by means of the existing reference three-dimensional eyebrow model, so that the accuracy of the target three-dimensional eyebrow model is higher.
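Step 2061 can be sketched as follows. Coordinate bounding ranges stand in for the endpoint-difference distances described above, and all names and sample coordinates are illustrative assumptions.

```python
import numpy as np

def lift_to_3d(target_2d: np.ndarray, ref_2d: np.ndarray, ref_3d: np.ndarray) -> np.ndarray:
    """Scale the first and second coordinates of the target contour points
    by the mapping ratio between the reference eyebrow map and the
    reference three-dimensional eyebrow model, then take the third
    coordinate from the corresponding reference three-dimensional point.

    target_2d: (n, 2) target contour points from the target eyebrow map
    ref_2d:    (m, 2) reference contour points of the reference eyebrow map
    ref_3d:    (n, 3) reference 3-D contour points corresponding one-to-one
               to the target contour points (supplies the third coordinate)
    """
    w2d, h2d = ref_2d.max(axis=0) - ref_2d.min(axis=0)                # 1st / 2nd distance
    w3d, h3d = ref_3d[:, :2].max(axis=0) - ref_3d[:, :2].min(axis=0)  # 3rd / 4th distance
    xy = target_2d * np.array([w3d / w2d, h3d / h2d])  # adjusted 1st and 2nd coordinates
    return np.hstack([xy, ref_3d[:, 2:3]])             # 3rd coordinate from the reference

ref_2d = np.array([[0.0, 0.0], [2.0, 2.0]])            # hypothetical data
ref_3d = np.array([[0.0, 0.0, 0.5], [1.0, 1.0, 0.5]])
pts_3d = lift_to_3d(np.array([[2.0, 2.0], [1.0, 0.0]]), ref_2d, ref_3d)
```

Here the 2-D range is twice the 3-D range, so the first and second coordinates are halved while the third coordinate is carried over unchanged.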
2062. And deforming the reference three-dimensional eyebrow model based on the three-dimensional coordinates of each target three-dimensional contour point in each eyebrow segment to obtain a target three-dimensional eyebrow model.
In some embodiments, deforming the reference three-dimensional eyebrow model based on three-dimensional coordinates of each target three-dimensional contour point in each eyebrow segment to obtain a target three-dimensional eyebrow model, includes: for any segment of eyebrow: determining a second interpolation function corresponding to the eyebrow segment based on the three-dimensional coordinates of each target three-dimensional contour point in the eyebrow segment; determining a three-dimensional coordinate corresponding to at least one second interpolation point by adopting a second interpolation function; and deforming the reference three-dimensional eyebrow model based on the end points in each eyebrow segment and the three-dimensional coordinates corresponding to the second interpolation points to obtain a target three-dimensional eyebrow model.
The three-dimensional coordinates represent the target three-dimensional contour points in the eyebrow segment. In the reference three-dimensional eyebrow model, an eyebrow segment comprises at least three target three-dimensional contour points; interpolation can be performed based on the three-dimensional coordinates of each target three-dimensional contour point, and the obtained second interpolation function can represent the distribution of the target three-dimensional contour points in the eyebrow segment. The definition domain of the second interpolation function is a preset definition domain, and each target three-dimensional contour point is represented by a corresponding value in this domain. The preset definition domain can be set as required, for example [0, 1]. The two end points of the eyebrow segment are then kept unchanged, and uniformly distributed second interpolation points are selected from the eyebrow segment as the target contour points of the eyebrow segment, without considering the other target three-dimensional contour points.
In the embodiment of the disclosure, in the target three-dimensional eyebrow model, the second interpolation function is determined based on the three-dimensional coordinates of at least three target contour points of the eyebrow segment, so it can represent the distribution of the target contour points on the contour of the eyebrow segment. The three-dimensional coordinates determined from the second interpolation function and the second interpolation sequence are also located on the eyebrow segment, so the second interpolation points are likewise contour points on the eyebrow segment; since the positions of the plurality of second interpolation points determined by interpolation are uniform, the determined segment contour information can represent the contour of the eyebrow segment more accurately.
In some embodiments, the second interpolation function is used to represent a correspondence between a sequence number of second interpolation points in the eyebrow segment and three-dimensional coordinates of the second interpolation points, where the second interpolation points are points extracted from the eyebrow contour using the second interpolation function, and the sequence number of each second interpolation point refers to an order of each second interpolation point among all the second interpolation points.
And determining the three-dimensional coordinates corresponding to at least one second interpolation point by adopting a second interpolation function, wherein the method comprises the following steps: determining a second interpolation sequence corresponding to the eyebrow segment, wherein the second interpolation sequence comprises the serial numbers of a plurality of second interpolation points; and determining the three-dimensional coordinates corresponding to each second interpolation point by adopting a second interpolation function.
After the second interpolation function is determined, uniform second interpolation points need to be selected from the outline of the eyebrow segment by means of the second interpolation function, thereby obtaining the three-dimensional coordinates corresponding to at least one second interpolation point. For example, after obtaining 10 target three-dimensional contour points of the target three-dimensional eyebrow model, they can still be divided into the 3 eyebrow segments such as the upper edge of the eyebrow and the lower edge of the eyebrow. Taking the upper edge of the eyebrow as an example, denote the target three-dimensional contour points in the upper edge of the eyebrow as {v_0, v_1, …, v_m}. The second interpolation function f_3d then satisfies the following condition:

f_3d(t_i) = v_i, i = 0, 1, …, m,

wherein t_i is the value in the preset definition domain [0, 1] that represents the i-th target three-dimensional contour point.
Referring to FIG. 4, the reference three-dimensional contour points of the whole contour of the upper edge of the eyebrow in the reference three-dimensional eyebrow model are obtained by identification as {u_0, u_1, …, u_q}; the second interpolation sequence is {d_0/d_q, d_1/d_q, …, d_q/d_q}, wherein d_j is the cumulative distance corresponding to u_j; and the target three-dimensional contour points can then be obtained by interpolation as f_3d(d_j/d_q), j = 0, 1, …, q.
The reference three-dimensional eyebrow model can then be smoothly deformed into the target three-dimensional eyebrow model by the ARAP (As-Rigid-As-Possible) method, thereby obtaining a target three-dimensional eyebrow model matched with the target eyebrow map.
In the embodiment of the disclosure, since the three-dimensional coordinates of the plurality of reference three-dimensional contour points in the reference three-dimensional eyebrow model are known and the distances between the plurality of reference three-dimensional contour points are set uniformly, by determining the second interpolation sequence, the distances between the plurality of interpolation points in the determined second interpolation sequence are uniform, and the uniformity of the second interpolation sequence is improved.
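The resampling part of step 2062 can be sketched as below: the second interpolation function is fitted through the target three-dimensional contour points and evaluated at the second interpolation sequence taken from the reference model's normalized cumulative distances. Piecewise-linear interpolation stands in for the unspecified interpolation scheme, and the names and coordinates are illustrative.

```python
import numpy as np

def cumdist_params(points: np.ndarray) -> np.ndarray:
    """Normalized cumulative chord-length parameters in [0, 1]."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(seg)])
    return d / d[-1]

def resample_3d(target_pts: np.ndarray, ref_pts: np.ndarray) -> np.ndarray:
    """Evaluate the 'second interpolation function' through target_pts at
    the second interpolation sequence derived from the reference model's
    contour points ref_pts; end points are preserved (t = 0 and t = 1)."""
    t_known = cumdist_params(target_pts)
    t_seq = cumdist_params(ref_pts)          # second interpolation sequence
    return np.stack([np.interp(t_seq, t_known, target_pts[:, k])
                     for k in range(3)], axis=1)

# hypothetical upper-edge points and a uniformly spaced reference contour
upper_edge = np.array([[0, 0, 0], [1, 1, 0], [3, 1, 0], [4, 0, 0]], float)
ref_edge = np.zeros((9, 3)); ref_edge[:, 0] = np.linspace(0, 8, 9)
dense = resample_3d(upper_edge, ref_edge)    # 9 uniformly parameterized points
```

Because the reference points are uniformly spaced, the derived sequence is uniform, matching the uniformity argument in the paragraph above.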
In step 207, the terminal displays a virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, the target three-dimensional eyebrow model includes three-dimensional coordinates of a plurality of target three-dimensional contour points, the three-dimensional coordinates including a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension; and displaying the virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model, which comprises the following steps 2071-2072.
2071. And respectively adjusting the third coordinate of each target three-dimensional contour point according to the mapping ratio between the reference three-dimensional eyebrow model and the target three-dimensional eyebrow model.
In some embodiments, the adjusting the third coordinate of each target three-dimensional contour point according to the mapping ratio between the reference three-dimensional eyebrow model and the target three-dimensional eyebrow model includes: respectively determining a fifth distance corresponding to the target three-dimensional eyebrow model and a sixth distance corresponding to the reference three-dimensional eyebrow model, wherein the fifth distance is the difference value of the third coordinates of the two end points of the plurality of target three-dimensional contour points, and the sixth distance is the difference value of the third coordinates of the two end points of the plurality of reference three-dimensional contour points; determining a third proportion, wherein the third proportion is the proportion between the fifth distance and the sixth distance; and adjusting the third coordinate of each target three-dimensional contour point based on the third proportion.
In the embodiment of the disclosure, the third coordinates of each target three-dimensional contour point are respectively adjusted according to the mapping ratio between the reference three-dimensional eyebrow model and the target three-dimensional eyebrow model, so that the third coordinates of the target three-dimensional contour points in the target three-dimensional eyebrow model are aligned with the third coordinates of the face contour points in the target face model, and the accuracy of the target three-dimensional eyebrow model is improved.
2072. And displaying a virtual image corresponding to the target face image based on the adjusted target three-dimensional eyebrow model and the adjusted target three-dimensional face model.
In the embodiment of the disclosure, since the target three-dimensional eyebrow model is obtained by deforming the reference three-dimensional eyebrow model, and the third coordinate of the target three-dimensional contour point in the target three-dimensional eyebrow model is the third coordinate of the reference three-dimensional contour point in the reference three-dimensional eyebrow model, the third coordinate of the target three-dimensional contour point in the target three-dimensional eyebrow model is aligned with the third coordinate of the face contour point in the target face model, so that the accuracy of the target three-dimensional eyebrow model is improved, and the display of the target three-dimensional eyebrow model is facilitated.
After the target three-dimensional eyebrow model corresponding to the target eyebrow map is obtained, the model needs to be attached to the target three-dimensional face model, thereby obtaining the final virtual image.
The position of the target three-dimensional eyebrow model needs to be aligned with the eyebrow region in the target three-dimensional face model. For example, first select, on the target three-dimensional face model, the 10 face contour points that semantically correspond to the 10 target three-dimensional contour points, and record this set as lmk_face; record the set of 10 target three-dimensional contour points of the target three-dimensional eyebrow model as v_mm. The range of the third coordinate of the eyebrow region in the target face model is obtained, the range of v_mm is aligned to the range of lmk_face, the transformation of v_mm → lmk_face is solved and recorded as T, and the vertices of the target three-dimensional eyebrow model are transformed as v_mm → v_mm · T.
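A minimal reading of the alignment above is a per-axis scale and translation solved from the two coordinate ranges. The disclosure only specifies aligning the range of v_mm to that of lmk_face; the names follow the text, but this particular solution method is an assumption.

```python
import numpy as np

def align_ranges(v_mm: np.ndarray, lmk_face: np.ndarray) -> np.ndarray:
    """Solve the v_mm -> lmk_face transform that aligns the coordinate
    range of the eyebrow contour points to that of the corresponding
    face contour points, and apply it to v_mm."""
    s = (lmk_face.max(0) - lmk_face.min(0)) / (v_mm.max(0) - v_mm.min(0))  # axis scale
    t = lmk_face.min(0) - v_mm.min(0) * s                                  # translation
    return v_mm * s + t   # in practice, apply s and t to all eyebrow model vertices

v_mm = np.array([[0.0, 0.0, 0.0], [2.0, 1.0, 4.0]])       # hypothetical data
lmk_face = np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
aligned = align_ranges(v_mm, lmk_face)
```

After alignment the eyebrow points span exactly the range of the face landmarks, so the eyebrow model sits on the eyebrow region of the face model.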
For example, referring to fig. 5, fig. 5 illustrates a target three-dimensional eyebrow model aligned with an eyebrow region in a target three-dimensional face model.
For example, referring to fig. 6, the present disclosure provides a new technique for generating a stylized eyebrow figure that avoids the eyebrow flaws produced when a face map is applied to a three-dimensional face model. The method first determines a target face map and a target three-dimensional face model based on the target face image, extracts a target eyebrow map from the target face map, and extracts eyebrow contour information representing the eyebrow contour from the target eyebrow map by means of the reference eyebrow map and the reference three-dimensional eyebrow model. The target eyebrow map is then converted into a smooth target three-dimensional eyebrow model, eliminating the unsmooth eyebrow contour flaws introduced by maps in the virtual image, and the target face map and the target eyebrow map are pasted on the target three-dimensional face model, so that the virtual image corresponding to the target face is displayed and the display effect of the virtual image is improved.
In the embodiment of the disclosure, the eyebrow contour information is extracted from the two-dimensional target eyebrow map, and the eyebrow contour information can represent the eyebrow contour, so that the target three-dimensional eyebrow model corresponding to the target eyebrow map is generated according to the extracted eyebrow contour information, and the conversion from the two-dimensional map to the three-dimensional model for the eyebrow is realized, so that the virtual image corresponding to the target face image can be displayed based on the target three-dimensional eyebrow model and the target three-dimensional face model, the lines of the eyebrow in the displayed virtual image are smooth, and the display effect of the virtual image is improved.
Fig. 7 is a block diagram illustrating a structure of an avatar display apparatus according to an exemplary embodiment. Referring to fig. 7, the apparatus includes:
an obtaining unit 701 configured to perform obtaining a target eyebrow map and a target three-dimensional face model corresponding to a target face image, the target three-dimensional face model not including eyebrows;
an extracting unit 702 configured to perform extracting eyebrow contour information from the target eyebrow map, the eyebrow contour information characterizing contours of eyebrows in the target eyebrow map;
a generating unit 703 configured to perform generating a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information;
and a display unit 704 configured to display an avatar corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, the extraction unit 702 includes:
the identification subunit is configured to perform contour point identification on the target eyebrow map to obtain two-dimensional coordinates of a plurality of target contour points;
a determining subunit configured to perform determining eyebrow contour information based on the two-dimensional coordinates of the plurality of target contour points, the eyebrow contour information including two-dimensional coordinates of at least three target contour points.
In some embodiments, the determining subunit is configured to perform:
determining a first brow point, a second brow point and a brow tail point from a plurality of target contour points;
dividing the outline of the target eyebrow map into three eyebrow segments by respectively taking the first brow point, the second brow point and the brow tail point as the end points of the eyebrow segments, wherein each eyebrow segment comprises two end points and at least one target contour point between the two end points;

respectively determining segment contour information corresponding to each eyebrow segment based on the two-dimensional coordinates of the target contour points in each eyebrow segment, wherein the first segment contour information corresponding to the first eyebrow segment comprises the two-dimensional coordinates of the first brow point, the second brow point and at least one target contour point between them; the second segment contour information corresponding to the second eyebrow segment comprises the two-dimensional coordinates of the brow tail point, the first brow point and at least one target contour point between them; and the third segment contour information corresponding to the third eyebrow segment comprises the two-dimensional coordinates of the brow tail point, the second brow point and at least one target contour point between them.
In some embodiments, the determining subunit is configured to perform:
for any segment of eyebrow:
determining a first interpolation function corresponding to the eyebrow segment based on the two-dimensional coordinates of each target contour point in the eyebrow segment;
determining two-dimensional coordinates corresponding to at least one first interpolation point by adopting a first interpolation function;
and determining the segment contour information corresponding to the eyebrow segment based on the two-dimensional coordinates of the end points in the eyebrow segment and the two-dimensional coordinates corresponding to the at least one first interpolation point.
In some embodiments, the first interpolation function is used to represent a correspondence between a sequence number of a first interpolation point in the eyebrow segment and two-dimensional coordinates of the first interpolation point, and the determining subunit is configured to perform:
determining a first interpolation sequence corresponding to the eyebrow segment, wherein the first interpolation sequence comprises serial numbers of a plurality of first interpolation points;
and determining the two-dimensional coordinates corresponding to each first interpolation point by adopting the first interpolation function.
In some embodiments, the determining subunit is configured to perform:
determining the maximum distance corresponding to the profile information of a reference segment of the eyebrow segment, wherein the maximum distance is the distance between two end points in the profile information of the reference segment, and the profile information of the reference segment is the profile information of the eyebrow segment in a reference three-dimensional eyebrow model corresponding to a reference eyebrow map;
determining the accumulated distance corresponding to each reference three-dimensional contour point in the reference fragment contour information, wherein the accumulated distance is the distance between the reference three-dimensional contour point and a target end point in the reference fragment contour information;
and determining a first interpolation sequence corresponding to the eyebrow segment based on the ratio of the accumulated distance corresponding to each reference three-dimensional contour point to the maximum distance.
In some embodiments, the generating unit 703 includes:
and the deformation subunit is configured to perform deformation on the reference three-dimensional eyebrow model corresponding to the reference eyebrow map based on the eyebrow contour information to obtain a target three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model is matched with the eyebrow contour information.
In some embodiments, the eyebrow contour information includes two-dimensional coordinates of at least three target contour points, and the warping subunit is configured to perform:
determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map and the reference three-dimensional eyebrow model;
and deforming the reference three-dimensional eyebrow model based on the three-dimensional coordinates of each target three-dimensional contour point to obtain a target three-dimensional eyebrow model.
In some embodiments, the eyebrow contour information includes first segment contour information corresponding to a first eyebrow segment, second segment contour information corresponding to a second eyebrow segment, and third segment contour information corresponding to a third eyebrow segment, each segment contour information includes two-dimensional coordinates of at least three target contour points, the two-dimensional coordinates include a first coordinate belonging to a first dimension and a second coordinate belonging to a second dimension;
the reference eyebrow map comprises two-dimensional coordinates of a plurality of reference contour points, the reference three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of reference three-dimensional contour points, and the three-dimensional coordinates comprise a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension and a third coordinate belonging to a third dimension;
a morphing subunit configured to perform:
respectively adjusting the first coordinate and the second coordinate of each target contour point according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model;
and determining the first coordinate and the second coordinate of each target contour point after adjustment and the third coordinate of the reference three-dimensional contour point corresponding to each target contour point as the three-dimensional coordinate of one target three-dimensional contour point.
In some embodiments, a morphing subunit configured to perform:
determining a first distance and a second distance corresponding to the reference eyebrow map, wherein the first distance is the difference value of first coordinates of two end points of the plurality of reference contour points, and the second distance is the difference value of second coordinates of the two end points of the plurality of reference contour points;
determining a third distance and a fourth distance corresponding to the reference three-dimensional eyebrow model, wherein the third distance is a difference value of first coordinates of two end points of the plurality of reference three-dimensional contour points, and the fourth distance is a difference value of second coordinates of the two end points of the plurality of reference three-dimensional contour points;
respectively determining a first proportion and a second proportion, wherein the first proportion is the proportion between the first distance and the third distance, and the second proportion is the proportion between the second distance and the fourth distance;
and adjusting the first coordinate of each target contour point based on the first proportion, and adjusting the second coordinate of each target contour point based on the second proportion.
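The mapping-ratio adjustment described above can be sketched as follows. This is an illustrative sketch only: the function and variable names are hypothetical, and the direction of the scaling (dividing the map extents into the model extents) is an assumption, since the disclosure does not fix a particular implementation.

```python
import numpy as np

def map_contour_to_model(target_pts_2d, ref_pts_2d, ref_pts_3d):
    """Scale 2D target contour points into the reference model's x/y range and
    borrow the third (z) coordinate from the corresponding reference 3D points.

    target_pts_2d: (N, 2) target contour points from the target eyebrow map
    ref_pts_2d:    (M, 2) reference contour points from the reference eyebrow map
    ref_pts_3d:    (N, 3) reference 3D contour points corresponding to the targets
    """
    # First/second distances: coordinate extents of the reference map contour
    d1, d2 = ref_pts_2d.max(axis=0) - ref_pts_2d.min(axis=0)
    # Third/fourth distances: extents of the reference 3D contour (x and y only)
    d3, d4 = ref_pts_3d[:, :2].max(axis=0) - ref_pts_3d[:, :2].min(axis=0)
    # The first/second proportions of the text are d1/d3 and d2/d4; adjusting
    # "based on" them is assumed here to mean dividing by them, i.e. scaling
    # from map space into model space.
    ratio = np.array([d3 / d1, d4 / d2])
    adjusted_xy = target_pts_2d * ratio   # adjusted first and second coordinates
    z = ref_pts_3d[:, 2:3]                # third coordinate of the reference points
    return np.hstack([adjusted_xy, z])    # 3D coords of target 3D contour points
```

With this convention, a contour point at the edge of the reference map lands at the corresponding edge of the reference model's x/y range.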
In some embodiments, the eyebrow contour information includes two-dimensional coordinates of a target contour point in a plurality of eyebrow segments, and the deformation subunit is configured to perform:
for any segment of eyebrow:
determining a second interpolation function corresponding to the eyebrow segment based on the three-dimensional coordinates of each target three-dimensional contour point in the eyebrow segment;
determining a three-dimensional coordinate corresponding to at least one second interpolation point by adopting a second interpolation function;
and deforming the reference three-dimensional eyebrow model based on the end points in each eyebrow segment and the three-dimensional coordinates corresponding to the second interpolation points to obtain a target three-dimensional eyebrow model.
In some embodiments, the second interpolation function is used to represent a correspondence between a sequence number of a second interpolation point in the eyebrow segment and three-dimensional coordinates of the second interpolation point, and the deformation subunit is configured to perform:
determining a second interpolation sequence corresponding to the eyebrow segment, wherein the second interpolation sequence comprises the serial numbers of a plurality of second interpolation points;
and determining the three-dimensional coordinates corresponding to each second interpolation point by adopting a second interpolation function.
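The second interpolation function — a correspondence between a point's sequence number in the eyebrow segment and its three-dimensional coordinates — can be sketched as below. The disclosure does not name a particular interpolation family; this sketch assumes piecewise-linear interpolation per coordinate, and the names are hypothetical.

```python
import numpy as np

def interpolate_segment(contour_pts_3d, interp_seq):
    """Evaluate an interpolation function for one eyebrow segment.

    contour_pts_3d: (K, 3) target 3D contour points of the segment, in order
    interp_seq:     fractional sequence numbers in [0, K-1] at which to sample
                    (the "second interpolation sequence")
    """
    k = len(contour_pts_3d)
    knots = np.arange(k)  # sequence numbers of the known contour points
    # One 1-D interpolation per dimension; together they realize the mapping
    # from a sequence number to a 3D coordinate.
    return np.stack(
        [np.interp(interp_seq, knots, contour_pts_3d[:, d]) for d in range(3)],
        axis=1,
    )
```

A spline (rather than linear) interpolant could be substituted without changing the surrounding steps; the output is the set of second-interpolation-point coordinates used to deform the reference model.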
In some embodiments, the target three-dimensional eyebrow model includes three-dimensional coordinates of a plurality of target three-dimensional contour points, the three-dimensional coordinates including a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension;
a display unit 704, comprising:
an adjusting subunit, configured to adjust the third coordinate of each target three-dimensional contour point according to a mapping ratio between a reference three-dimensional eyebrow model and the target three-dimensional eyebrow model, the reference three-dimensional eyebrow model including three-dimensional coordinates of a plurality of reference three-dimensional contour points;
and a display subunit, configured to display the virtual image corresponding to the target face image based on the adjusted target three-dimensional eyebrow model and the target three-dimensional face model.
In some embodiments, the adjusting subunit is configured to perform:
respectively determining a fifth distance corresponding to the target three-dimensional eyebrow model and a sixth distance corresponding to the reference three-dimensional eyebrow model, wherein the fifth distance is a difference value of third coordinates of two end points of the plurality of target three-dimensional contour points, and the sixth distance is a difference value of the third coordinates of the two end points of the plurality of reference three-dimensional contour points;
determining a third proportion, wherein the third proportion is the proportion between the fifth distance and the sixth distance;
and adjusting the third coordinate of each target three-dimensional contour point based on the third proportion.
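The depth adjustment above can be sketched as follows. The names are hypothetical, and the direction of the adjustment (dividing by the third proportion so that the target depth range matches the reference model's) is an assumption not fixed by the text.

```python
import numpy as np

def adjust_depth(target_pts_3d, ref_pts_3d):
    """Rescale the third (depth) coordinate of the target eyebrow model using
    the fifth/sixth-distance ratio between it and the reference model."""
    fifth = target_pts_3d[:, 2].max() - target_pts_3d[:, 2].min()  # target depth extent
    sixth = ref_pts_3d[:, 2].max() - ref_pts_3d[:, 2].min()        # reference depth extent
    third_ratio = fifth / sixth
    adjusted = target_pts_3d.copy()
    # Assumed: dividing by the third proportion brings the target depth range
    # in line with the reference model before the avatar is displayed.
    adjusted[:, 2] = adjusted[:, 2] / third_ratio
    return adjusted
```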
In some embodiments, the obtaining unit 701 is configured to perform:
acquiring a target face image;
generating a target face map based on the target face image;
and extracting a target eyebrow map from the target face map.
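Extracting the target eyebrow map from the face map can be sketched as a crop around eyebrow landmarks. This sketch assumes the landmark points come from an upstream face-landmark detector (not specified by the disclosure); the names and the margin are illustrative.

```python
import numpy as np

def extract_eyebrow_map(face_map, eyebrow_landmarks, margin=4):
    """Crop the eyebrow region out of a face map.

    face_map:          (H, W, C) face texture image as an array
    eyebrow_landmarks: (N, 2) landmark points (x, y) of one eyebrow, assumed
                       to come from a face-landmark detector
    margin:            padding in pixels around the landmark bounding box
    """
    h, w = face_map.shape[:2]
    x0, y0 = eyebrow_landmarks.min(axis=0).astype(int) - margin
    x1, y1 = eyebrow_landmarks.max(axis=0).astype(int) + margin
    # Clamp the padded bounding box to the image bounds
    x0, y0 = max(x0, 0), max(y0, 0)
    x1, y1 = min(x1, w), min(y1, h)
    return face_map[y0:y1, x0:x1]
```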
In the embodiments of the present disclosure, eyebrow contour information is extracted from the two-dimensional target eyebrow map. Because this information represents the eyebrow contour, a target three-dimensional eyebrow model corresponding to the target eyebrow map can be generated from it, realizing the conversion of the eyebrow from a two-dimensional map to a three-dimensional model. The virtual image corresponding to the target face image can then be displayed based on the target three-dimensional eyebrow model and the target three-dimensional face model, so that the eyebrow lines in the displayed virtual image are smooth and the display effect of the virtual image is improved.
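The overall flow summarized above can be sketched as a pipeline. Each stage is injected as a callable, since the disclosure does not fix the implementations of the individual steps; all names here are hypothetical.

```python
def display_avatar(target_face_image,
                   detect_eyebrow_map,    # image -> 2D eyebrow map (assumed helper)
                   extract_contour,       # eyebrow map -> contour information (assumed)
                   build_eyebrow_model,   # contour info -> 3D eyebrow model (assumed)
                   build_face_model,      # image -> 3D face model, no eyebrows (assumed)
                   render):               # (face model, eyebrow model) -> avatar (assumed)
    """Minimal sketch of the disclosed pipeline: 2D eyebrow map -> contour
    information -> 3D eyebrow model -> displayed avatar."""
    eyebrow_map = detect_eyebrow_map(target_face_image)
    contour_info = extract_contour(eyebrow_map)        # eyebrow contour information
    eyebrow_model = build_eyebrow_model(contour_info)  # target 3D eyebrow model
    face_model = build_face_model(target_face_image)   # target 3D face model
    return render(face_model, eyebrow_model)
```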
With regard to the avatar display apparatus in the above-described embodiment, the specific manner in which each unit performs an operation has been described in detail in the embodiment of the related method, and will not be explained in detail here.
Fig. 8 is a block diagram illustrating a structure of a terminal according to an exemplary embodiment. In some embodiments, the terminal 800 is a desktop computer, notebook computer, tablet computer, smart phone, or other terminal. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 800 includes: a processor 801 and a memory 802.
In some embodiments, the processor 801 includes one or more processing cores, such as a 4-core processor or an 8-core processor. In some embodiments, the processor 801 is implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). In some embodiments, the processor 801 also includes a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 is integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 801 further includes an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
In some embodiments, memory 802 includes one or more computer-readable storage media that are non-transitory. In some embodiments, memory 802 also includes high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store executable instructions for execution by processor 801 to implement the avatar display method provided by the method embodiments of the present disclosure.
In some embodiments, the terminal 800 may further optionally include: a peripheral interface 803 and at least one peripheral. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are connected by a bus or signal line. In some embodiments, various peripheral devices are connected to peripheral interface 803 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a display screen 805, a camera assembly 806, an audio circuit 807, a positioning assembly 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 are implemented on separate chips or circuit boards, which are not limited by this embodiment.
The radio frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. In some embodiments, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. In some embodiments, the radio frequency circuit 804 communicates with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 further includes NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 805 is used to display a UI (User Interface). In some embodiments, the UI includes graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above its surface. In some embodiments, the touch signal is input to the processor 801 as a control signal for processing. At this point, the display 805 is also used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there is one display 805, disposed on a front panel of the terminal 800; in other embodiments, there are at least two displays 805, each disposed on a different surface of the terminal 800 or in a folded design; in still other embodiments, the display 805 is a flexible display disposed on a curved surface or a folded surface of the terminal 800. The display 805 may even be arranged in a non-rectangular irregular shape, i.e., an irregularly shaped screen. In some embodiments, the display 805 is made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 806 is used to capture images or video. In some embodiments, camera assembly 806 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and a VR (Virtual Reality) shooting function or other fusion shooting functions. In some embodiments, camera assembly 806 also includes a flash. In some embodiments, the flash is a single color temperature flash, and in some embodiments, the flash is a dual color temperature flash. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and is used for light compensation under different color temperatures.
In some embodiments, the audio circuitry 807 includes a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting them to the processor 801 for processing or to the radio frequency circuit 804 to realize voice communication. For stereo capture or noise reduction purposes, in some embodiments, multiple microphones are provided, each at a different location of the terminal 800. In some embodiments, the microphone is an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. In some embodiments, the speaker is a conventional membrane speaker; in other embodiments, it is a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans for uses such as distance measurement. In some embodiments, the audio circuitry 807 also includes a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). In some embodiments, the positioning component 808 is based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 809 is used to provide power to various components in terminal 800. In some embodiments, power supply 809 is an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery is a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery is also used to support fast charge technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, optical sensor 814, and proximity sensor 815.
In some embodiments, the acceleration sensor 811 detects acceleration magnitudes on three coordinate axes of a coordinate system established with the terminal 800. For example, the acceleration sensor 811 is used to detect the components of the gravitational acceleration in three coordinate axes. In some embodiments, the processor 801 controls the display screen 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. In some embodiments, the acceleration sensor 811 is also used for the acquisition of motion data of a game or user.
In some embodiments, the gyro sensor 812 detects a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 cooperates with the acceleration sensor 811 to acquire a 3D motion of the terminal 800 by the user. The processor 801 can implement the following functions according to the data collected by the gyro sensor 812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
In some embodiments, pressure sensors 813 are disposed on the side bezel of terminal 800 and/or underneath display screen 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, the holding signal of the user to the terminal 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The optical sensor 814 is used to collect the ambient light intensity. In one embodiment, the processor 801 controls the brightness of the display 805 based on the ambient light intensity collected by the optical sensor 814: when the ambient light intensity is high, the display brightness of the display 805 is increased; when the ambient light intensity is low, the display brightness of the display 805 is reduced. In another embodiment, the processor 801 also dynamically adjusts the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 814.
A proximity sensor 815, also known as a distance sensor, is typically disposed on the front panel of the terminal 800. The proximity sensor 815 is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 815 detects that the distance between the user and the front surface of the terminal 800 is gradually decreasing, the processor 801 controls the display 805 to switch from the bright screen state to the dark screen state; when the proximity sensor 815 detects that the distance between the user and the front surface of the terminal 800 is gradually increasing, the processor 801 controls the display 805 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 800, and can include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, there is also provided a computer-readable storage medium including instructions, such as a memory including instructions, executable by a processor of a terminal to perform the avatar display method in the above method embodiment. In some embodiments, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and so forth.
In an exemplary embodiment, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the avatar display method in the above-described method embodiments.
In some embodiments, the computer program according to the embodiments of the present disclosure may be deployed to be executed on one electronic device, on a plurality of electronic devices located at one site, or on a plurality of electronic devices distributed at a plurality of sites and interconnected by a communication network; the plurality of electronic devices distributed at the plurality of sites and interconnected by the communication network may constitute a blockchain system. The electronic device may be provided as a terminal.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. An avatar display method, comprising:
acquiring a target eyebrow map and a target three-dimensional face model corresponding to a target face image, wherein the target three-dimensional face model does not include eyebrows;
extracting eyebrow contour information from the target eyebrow map, wherein the eyebrow contour information represents the contour of the eyebrow in the target eyebrow map;
generating a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information;
and displaying a virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
2. The method according to claim 1, wherein said extracting eyebrow contour information from said target eyebrow map comprises:
carrying out contour point identification on the target eyebrow map to obtain two-dimensional coordinates of a plurality of target contour points;
determining the eyebrow contour information based on the two-dimensional coordinates of the target contour points, wherein the eyebrow contour information comprises the two-dimensional coordinates of at least three target contour points.
3. The method according to claim 2, wherein said determining said eyebrow contour information based on two-dimensional coordinates of said plurality of target contour points comprises:
determining a first eyebrow point, a second eyebrow point, and an eyebrow tail point from the plurality of target contour points;
dividing the contour of the target eyebrow map into three eyebrow segments by taking the first eyebrow point, the second eyebrow point, and the eyebrow tail point as end points of the eyebrow segments, wherein each eyebrow segment comprises two end points and at least one target contour point between the two end points;
respectively determining segment contour information corresponding to each eyebrow segment based on two-dimensional coordinates of the target contour points in each eyebrow segment, wherein first segment contour information corresponding to the first eyebrow segment comprises two-dimensional coordinates of the first eyebrow point, the second eyebrow point, and at least one target contour point between the first eyebrow point and the second eyebrow point; second segment contour information corresponding to the second eyebrow segment comprises two-dimensional coordinates of the eyebrow tail point, the first eyebrow point, and at least one target contour point between the eyebrow tail point and the first eyebrow point; and third segment contour information corresponding to the third eyebrow segment comprises two-dimensional coordinates of the eyebrow tail point, the second eyebrow point, and at least one target contour point between the eyebrow tail point and the second eyebrow point.
4. The method according to claim 3, wherein said determining segment contour information corresponding to each of said eyebrow segments respectively based on two-dimensional coordinates of said target contour point in each of said eyebrow segments comprises:
for any of the eyebrow segments:
determining a first interpolation function corresponding to the eyebrow segment based on the two-dimensional coordinates of each target contour point in the eyebrow segment;
determining two-dimensional coordinates corresponding to at least one first interpolation point by adopting the first interpolation function;
and determining the segment contour information corresponding to the eyebrow segment based on the two-dimensional coordinates of the end points in the eyebrow segment and the two-dimensional coordinates corresponding to the at least one first interpolation point.
5. The method according to claim 4, wherein the first interpolation function is used to represent a correspondence between a sequence number of a first interpolation point in the eyebrow segment and two-dimensional coordinates of the first interpolation point, and the determining, using the first interpolation function, the two-dimensional coordinates corresponding to at least one first interpolation point comprises:
determining a first interpolation sequence corresponding to the eyebrow segment, wherein the first interpolation sequence comprises sequence numbers of a plurality of first interpolation points;
and determining the two-dimensional coordinates corresponding to each first interpolation point by adopting the first interpolation function.
6. The method of claim 5, wherein determining the first interpolation sequence corresponding to the eyebrow segment comprises:
determining a maximum distance corresponding to the profile information of a reference segment of the eyebrow segment, wherein the maximum distance is a distance between two end points in the profile information of the reference segment, and the profile information of the reference segment is the profile information of the eyebrow segment in a reference three-dimensional eyebrow model corresponding to a reference eyebrow map;
determining the accumulated distance corresponding to each reference three-dimensional contour point in the reference fragment contour information, wherein the accumulated distance is the distance between the reference three-dimensional contour point and a target end point in the reference fragment contour information;
and determining a first interpolation sequence corresponding to the eyebrow segment based on the ratio of the accumulated distance corresponding to each reference three-dimensional contour point to the maximum distance.
7. The method according to claim 1, wherein generating a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information comprises:
and deforming a reference three-dimensional eyebrow model corresponding to a reference eyebrow map based on the eyebrow contour information to obtain the target three-dimensional eyebrow model, so that the eyebrow contour of the target three-dimensional eyebrow model matches the eyebrow contour information.
8. The method of claim 7, wherein the eyebrow contour information includes two-dimensional coordinates of at least three target contour points, and the transforming a reference three-dimensional eyebrow model corresponding to a reference eyebrow map based on the eyebrow contour information to obtain the target three-dimensional eyebrow model so that the eyebrow contour of the target three-dimensional eyebrow model matches the eyebrow contour information comprises:
determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map and the reference three-dimensional eyebrow model;
and deforming the reference three-dimensional eyebrow model based on the three-dimensional coordinates of each target three-dimensional contour point to obtain the target three-dimensional eyebrow model.
9. The method according to claim 8, wherein the eyebrow contour information includes first segment contour information corresponding to a first eyebrow segment, second segment contour information corresponding to a second eyebrow segment, and third segment contour information corresponding to a third eyebrow segment, each segment contour information including two-dimensional coordinates of at least three target contour points, the two-dimensional coordinates including a first coordinate belonging to a first dimension and a second coordinate belonging to a second dimension;
the reference eyebrow map comprises two-dimensional coordinates of a plurality of reference contour points, the reference three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of reference three-dimensional contour points, the three-dimensional coordinates comprising a first coordinate belonging to the first dimension, a second coordinate belonging to the second dimension, and a third coordinate belonging to the third dimension;
the determining three-dimensional coordinates of at least three target three-dimensional contour points based on the eyebrow contour information, the reference eyebrow map, and the reference three-dimensional eyebrow model includes:
respectively adjusting the first coordinate and the second coordinate of each target contour point according to the mapping ratio between the reference eyebrow map and the reference three-dimensional eyebrow model;
and determining the first coordinate and the second coordinate of each target contour point after adjustment and the third coordinate of the reference three-dimensional contour point corresponding to each target contour point as the three-dimensional coordinate of one target three-dimensional contour point.
10. The method according to claim 9, wherein said adjusting the first and second coordinates of each of said target contour points according to the mapping ratio between said reference eyebrow map and said reference three-dimensional eyebrow model comprises:
determining a first distance and a second distance corresponding to the reference eyebrow map, wherein the first distance is a difference value of first coordinates of two end points of the plurality of reference contour points, and the second distance is a difference value of second coordinates of the two end points of the plurality of reference contour points;
determining a third distance and a fourth distance corresponding to the reference three-dimensional eyebrow model, wherein the third distance is a difference value of first coordinates of two end points of the plurality of reference three-dimensional contour points, and the fourth distance is a difference value of second coordinates of the two end points of the plurality of reference three-dimensional contour points;
respectively determining a first proportion and a second proportion, wherein the first proportion is the proportion between the first distance and the third distance, and the second proportion is the proportion between the second distance and the fourth distance;
and adjusting the first coordinate of each target contour point based on the first proportion, and adjusting the second coordinate of each target contour point based on the second proportion.
11. The method according to claim 8, wherein said eyebrow contour information includes two-dimensional coordinates of said target contour points in a plurality of eyebrow segments, and said deforming said reference three-dimensional eyebrow model based on three-dimensional coordinates of each of said target three-dimensional contour points to obtain said target three-dimensional eyebrow model comprises:
for any of the eyebrow segments:
determining a second interpolation function corresponding to the eyebrow segment based on the three-dimensional coordinates of each target three-dimensional contour point in the eyebrow segment;
determining a three-dimensional coordinate corresponding to at least one second interpolation point by adopting the second interpolation function;
and deforming the reference three-dimensional eyebrow model based on the end points in each eyebrow segment and the three-dimensional coordinates corresponding to the second interpolation point to obtain the target three-dimensional eyebrow model.
12. The method according to claim 11, wherein the second interpolation function is used to represent a corresponding relationship between a sequence number of a second interpolation point in the eyebrow segment and three-dimensional coordinates of the second interpolation point, and the determining, by using the second interpolation function, the three-dimensional coordinates corresponding to at least one second interpolation point comprises:
determining a second interpolation sequence corresponding to the eyebrow segment, wherein the second interpolation sequence comprises sequence numbers of a plurality of second interpolation points;
and determining the three-dimensional coordinates corresponding to each second interpolation point by adopting the second interpolation function.
13. The method according to claim 1, wherein said target three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of target three-dimensional contour points, said three-dimensional coordinates comprising a first coordinate belonging to a first dimension, a second coordinate belonging to a second dimension, and a third coordinate belonging to a third dimension;
the displaying of the virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model includes:
respectively adjusting the third coordinate of each target three-dimensional contour point according to the mapping ratio between a reference three-dimensional eyebrow model and the target three-dimensional eyebrow model, wherein the reference three-dimensional eyebrow model comprises three-dimensional coordinates of a plurality of reference three-dimensional contour points;
and displaying a virtual image corresponding to the target face image based on the adjusted target three-dimensional eyebrow model and the adjusted target three-dimensional face model.
14. The method according to claim 13, wherein said adjusting the third coordinate of each of said target three-dimensional contour points according to the mapping ratio between the reference three-dimensional eyebrow model and said target three-dimensional eyebrow model comprises:
respectively determining a fifth distance corresponding to the target three-dimensional eyebrow model and a sixth distance corresponding to the reference three-dimensional eyebrow model, wherein the fifth distance is the difference between the third coordinates of two end points of the plurality of target three-dimensional contour points, and the sixth distance is the difference between the third coordinates of two end points of the plurality of reference three-dimensional contour points;
determining a third ratio, the third ratio being a ratio between the fifth distance and the sixth distance;
and adjusting the third coordinate of each target three-dimensional contour point based on the third ratio.
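Taken literally, claim 14 computes the ratio of the third-coordinate (depth) spans of the two models and scales each target point's third coordinate by it. A sketch under two stated assumptions, since neither is fixed by the claim text shown here: the "two end points" are the first and last points in each list, and the adjustment is a plain multiplication by the third ratio:

```python
def adjust_third_coordinates(target_points, reference_points):
    # fifth distance: depth span between the target model's end points
    fifth = target_points[-1][2] - target_points[0][2]
    # sixth distance: depth span between the reference model's end points
    sixth = reference_points[-1][2] - reference_points[0][2]
    third_ratio = fifth / sixth
    return [(x, y, z * third_ratio) for x, y, z in target_points]

target = [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0)]       # depth span 2
reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 4.0)]    # depth span 4
adjusted = adjust_third_coordinates(target, reference)
```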
15. The method according to any one of claims 1-14, wherein obtaining the target eyebrow map comprises:
acquiring a target face image;
generating the target face map based on the target face image;
and extracting the target eyebrow map from the target face map.
16. An avatar display apparatus, said apparatus comprising:
an acquisition unit configured to acquire a target eyebrow map and a target three-dimensional face model corresponding to a target face image, wherein the target three-dimensional face model does not include eyebrows;
an extracting unit configured to extract eyebrow contour information from the target eyebrow map, the eyebrow contour information characterizing contours of eyebrows in the target eyebrow map;
a generating unit configured to generate a target three-dimensional eyebrow model corresponding to the target eyebrow map based on the eyebrow contour information;
and a display unit configured to display the virtual image corresponding to the target face image based on the target three-dimensional eyebrow model and the target three-dimensional face model.
17. A terminal, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the avatar display method of any of claims 1-15.
18. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of a terminal, enable the terminal to perform the avatar display method of any of claims 1-15.
CN202211079191.6A 2022-09-05 2022-09-05 Virtual image display method, device, terminal and storage medium Pending CN115393562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211079191.6A CN115393562A (en) 2022-09-05 2022-09-05 Virtual image display method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN115393562A 2022-11-25

Family

ID=84124857

Country Status (1)

Country Link
CN (1) CN115393562A (en)

Similar Documents

Publication Publication Date Title
CN110929651B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110189340B (en) Image segmentation method and device, electronic equipment and storage medium
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN109308727B (en) Virtual image model generation method and device and storage medium
CN109191549B (en) Method and device for displaying animation
CN109829864B (en) Image processing method, device, equipment and storage medium
CN109947338B (en) Image switching display method and device, electronic equipment and storage medium
CN111028144B (en) Video face changing method and device and storage medium
CN110263617B (en) Three-dimensional face model obtaining method and device
CN112581358B (en) Training method of image processing model, image processing method and device
WO2022042425A1 (en) Video data processing method and apparatus, and computer device and storage medium
WO2022052620A1 (en) Image generation method and electronic device
CN109978996B (en) Method, device, terminal and storage medium for generating expression three-dimensional model
CN111447389B (en) Video generation method, device, terminal and storage medium
CN112907725A (en) Image generation method, image processing model training method, image processing device, and image processing program
CN109547843B (en) Method and device for processing audio and video
CN110991457A (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN110956580A (en) Image face changing method and device, computer equipment and storage medium
CN111105474B (en) Font drawing method, font drawing device, computer device and computer readable storage medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN110991445A (en) Method, device, equipment and medium for identifying vertically arranged characters
CN110619614A (en) Image processing method and device, computer equipment and storage medium
CN114140342A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112135191A (en) Video editing method, device, terminal and storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination