CN110796083B - Image display method, device, terminal and storage medium


Publication number: CN110796083B
Authority: CN (China)
Prior art keywords: face, three-dimensional model, image, parameter
Legal status: Active (granted)
Application number: CN201911039786.7A
Other languages: Chinese (zh)
Other versions: CN110796083A
Inventors: 曹玮剑, 曹煊, 赵艳丹, 葛彦昊, 汪铖杰
Current assignee: Tencent Technology Shenzhen Co Ltd
Original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority: CN201911039786.7A
Published as: CN110796083A (application), CN110796083B (grant)

Classifications

    • G06V 20/647 (Physics; Computing; Image or video recognition or understanding) - Scenes; scene-specific elements: three-dimensional objects recognized by matching two-dimensional images to three-dimensional objects
    • G06T 19/20 (Image data processing or generation) - Manipulating 3D models or images for computer graphics: editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/246 (Image analysis) - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 40/168 (Recognition of biometric, human-related or animal-related patterns) - Human faces: feature extraction; face representation
    • G06V 40/172 (Recognition of biometric, human-related or animal-related patterns) - Human faces: classification, e.g. identification
    • G06T 2207/10016 (Indexing scheme, image acquisition modality) - Video; image sequence
    • G06T 2207/30201 (Indexing scheme, subject of image) - Human being; person: face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides an image display method, apparatus, terminal and storage medium, and belongs to the field of multimedia technologies. In the scheme provided by the embodiments of the application, at least one first projection point is determined from the second pose parameter corresponding to a target image and the adjusted three-dimensional face model, and a three-dimensional image is displayed in the target image according to the coordinate values of the at least one first projection point. When displaying a three-dimensional image, the terminal can therefore obtain, by adjustment, a three-dimensional face model that conforms to the target image, and display the three-dimensional image on that basis. The three-dimensional face model is highly general, convenient and flexible to adjust, and produces accurate adjustment results, so the scheme can be widely applied to the display of three-dimensional images.

Description

Image display method, device, terminal and storage medium
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to an image display method, an image display device, a terminal, and a storage medium.
Background
With the rapid development of face registration technology, it has been widely applied to face recognition and face beautification. Face registration, also called face key-point localization, means that given a face image, a computer device can automatically locate points with specific semantics on the image, such as eye corners, mouth corners, nose tip, eyebrows and facial contour. In some application scenarios, typified by beautification and 3D pendants, points with less well-defined semantics beyond these specific semantic points also need to be located, such as the cheeks, the apple muscles, the nasolabial folds and the eye bags, so that three-dimensional images such as 3D pendants can be displayed accurately on the face.
One implementation in the related art locates these semantically ambiguous points by geometric estimation, thereby displaying the three-dimensional image on the face. Specifically, the coordinates of additional points are estimated from the coordinates of the existing specific semantic points through statistical rules. For example, if the forehead point lies on the line connecting the nose-base point and the nose-bridge point, and the distance from the nose bridge to the forehead is 1.5 times the distance from the nose bridge to the nose base, then the position of the forehead point can be determined once the coordinates of the nose-base and nose-bridge points are known.
The problem with this solution is that a calculation rule must be designed for every point to be determined, and such rules do not generalize: different rules are needed for different people, and even for different expressions and poses of the same person. The geometric-estimation method is therefore too restrictive to be widely applied to the display of three-dimensional images.
Disclosure of Invention
The embodiments of the present application provide an image display method, apparatus, terminal and storage medium, which solve the problem that the existing geometric-estimation method is too restrictive to be widely applied to the display of three-dimensional images. The technical solution is as follows:
In one aspect, there is provided an image display method including:
determining a second pose parameter corresponding to a target image according to facial features included in the target image, a first correspondence between the three-dimensional face model in the previous frame image of the target image and the facial features, and a first pose parameter corresponding to the previous frame image;
adjusting the three-dimensional face model according to the second pose parameter;
determining coordinate values of at least one first projection point included in the target image according to the second pose parameter and the adjusted three-dimensional face model;
and displaying a three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point.
In another aspect, there is provided an image display apparatus including:
a determining module configured to determine a second pose parameter corresponding to a target image according to facial features included in the target image, a first correspondence between the three-dimensional face model in the previous frame image of the target image and the facial features, and a first pose parameter corresponding to the previous frame image;
an adjusting module configured to adjust the three-dimensional face model according to the second pose parameter;
the determining module being further configured to determine coordinate values of at least one first projection point included in the target image according to the second pose parameter and the adjusted three-dimensional face model;
and a display module configured to display a three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point.
In a possible implementation, the adjusting module is further configured to adjust the pose of the three-dimensional face model according to the second pose parameter; determine a second correspondence between the facial features and the adjusted three-dimensional face model according to the pose-adjusted model; and determine the adjusted three-dimensional face model based on the second correspondence.
In a possible implementation, the facial features include facial-organ features and contour features; the adjusting module is further configured to keep the correspondence between the three-dimensional coordinate points in the three-dimensional face model and the two-dimensional coordinate points included in the facial-organ features unchanged, and determine a plurality of three-dimensional coordinate points from the adjusted model in a preset sampling order; and to match these three-dimensional coordinate points with the two-dimensional coordinate points included in the contour features in the same preset sampling order, obtaining the second correspondence between the facial features and the adjusted three-dimensional face model.
In a possible implementation, the adjusting module is further configured to obtain at least one parallel line segment preset in the three-dimensional face model, and to select from each parallel line segment, in the preset sampling order, a coordinate point located on the boundary of the three-dimensional face model that corresponds to a two-dimensional coordinate point included in the contour features.
In a possible implementation, the adjusting module is further configured to obtain the second feature parameter and the second expression parameter that determine the three-dimensional face model in the previous frame image of the target image;
and to determine the first feature parameter and the first expression parameter corresponding to the adjusted three-dimensional face model according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the corresponding three-dimensional coordinate points on the three-dimensional face model, the second feature parameter and the second expression parameter.
In one possible implementation, the apparatus further includes:
an updating module configured to update the second pose parameter according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the corresponding three-dimensional coordinate points on the three-dimensional face model, and the first pose parameter;
the adjusting module being further configured to adjust the pose of the three-dimensional face model according to the updated second pose parameter;
the updating module being further configured to update the second pose parameter again until a preset stopping condition is reached.
In a possible implementation, the display module is further configured to acquire the three-dimensional image to be displayed, and to display, according to the coordinate values of the at least one first projection point, at least one image point included in the three-dimensional image on the at least one first projection point in sequence.
In one possible implementation, the apparatus further includes:
a tracking module configured to perform face tracking on the coordinate points of the facial features included in the previous frame image;
the determining module being further configured to determine the two-dimensional coordinate points included in the facial features of the target image according to the result of the face tracking.
In one possible implementation, the apparatus further includes:
a generating module configured to generate the three-dimensional face model according to a preset third feature parameter and third expression parameter;
the determining module being further configured to determine a third pose parameter corresponding to the first frame image according to the facial-organ features included in the first frame image and a third correspondence between the three-dimensional face model and the facial-organ features;
the adjusting module being further configured to adjust the three-dimensional face model according to the third pose parameter;
the determining module being further configured to determine coordinate values of at least one second projection point included in the first frame image according to the third pose parameter and the adjusted three-dimensional face model;
the display module being further configured to display the three-dimensional image in the first frame image according to the coordinate values of the at least one second projection point.
In one possible implementation, the apparatus further includes:
a detection and registration module configured to perform face detection and face registration on the first frame image, and determine at least one two-dimensional coordinate point included in the facial-organ features of the first frame image.
In a possible implementation, the adjusting module is further configured to adjust the pose of the three-dimensional face model according to the third pose parameter; acquire the contour features included in the first frame image, the contour features and the facial-organ features together forming the facial features; select, from the adjusted model, the three-dimensional coordinate points corresponding to the two-dimensional coordinate points included in the contour features, obtaining a fourth correspondence between the facial features and the three-dimensional face model in the first frame image; and adjust the three-dimensional face model based on the fourth correspondence.
In a possible implementation, the adjusting module is further configured to update the third pose parameter according to the facial-organ features, the contour features and the fourth correspondence; adjust the pose of the three-dimensional face model according to the updated third pose parameter; update the fourth correspondence according to the pose-adjusted model; and determine the adjusted three-dimensional face model according to the updated fourth correspondence and the updated third pose parameter.
In any of the foregoing possible implementations, a ratio of the face area covered by the at least one first projection point to a total face area included in the target image is greater than a target ratio threshold.
In another aspect, a terminal is provided that includes a processor and a memory for storing at least one piece of program code that is loaded and executed by the processor to implement operations performed in an image display method in an embodiment of the present application.
In another aspect, a storage medium is provided, where at least one section of program code is stored, where the at least one section of program code is used to perform the image display method in the embodiment of the present application.
The technical solutions provided in the embodiments of the present application bring at least the following beneficial effects:
When a target image is processed, the second pose parameter corresponding to the target image is determined from the facial features included in the target image, combined with the correspondence between the three-dimensional face model and the facial features determined in the previous frame image and the first pose parameter corresponding to the previous frame image. At least one first projection point is then determined from the second pose parameter and the three-dimensional face model adjusted according to it, and a three-dimensional image is displayed in the target image according to the coordinate values of the at least one first projection point. When displaying the three-dimensional image, the terminal can thus obtain, by adjustment, a three-dimensional face model that conforms to the target image, and display the three-dimensional image on that basis. The three-dimensional face model is highly general, convenient and flexible to adjust, and produces accurate adjustment results, so the scheme can be widely applied to the display of three-dimensional images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an implementation environment of an image display method according to an embodiment of the present application;
fig. 2 is a flowchart of an image display method according to an embodiment of the present application;
fig. 3 is a schematic diagram of the two-dimensional coordinate points included in the facial-organ features and the corresponding three-dimensional coordinate points on the three-dimensional face model according to an embodiment of the present application;
FIG. 4 is a schematic illustration of a three-dimensional face model on which several groups of parallel line segments are defined, provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of dense two-dimensional coordinate points provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a display of a three-dimensional image provided by an embodiment of the present application;
FIG. 7 is a flowchart of another image display method according to an embodiment of the present application;
fig. 8 is a block diagram of an image display apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
The embodiments of the present application mainly relate to displaying a three-dimensional image on a two-dimensional image, specifically in the following scenario: when a user processes an image with an image-beautification application, three-dimensional images such as 3D (three-dimensional) headwear, 3D facial make-up or cosmetics may be added to the image to be processed. The three-dimensional image is adjusted to follow the motion of the object in the image to be processed. For example, if the user adds a 3D cartoon hat to a face included in the image, the 3D cartoon hat rotates together with the face when the face rotates.
The main flow of the image display method provided in the embodiments of the present application is briefly described as follows. First, the facial features included in the current frame image (the target image), the correspondence between the three-dimensional face model and the facial features determined when processing the previous frame image, and the first pose parameter corresponding to the previous frame image are obtained, and the second pose parameter corresponding to the target image is determined from this information. The three-dimensional face model is then adjusted according to the second pose parameter to obtain a model that conforms to the target image, after which the coordinate values of at least one projection point (which may be dense projection points) included in the target image are determined from the second pose parameter and the adjusted model. Finally, the three-dimensional image to be displayed is positioned according to the coordinate values of the at least one projection point and displayed on the target image.
Fig. 1 is a schematic diagram of an implementation environment of an image display method according to an embodiment of the present application. As shown in fig. 1, the implementation environment includes a terminal 110 and a server 120.
Terminal 110 may be connected to server 120 through a wireless or wired network. Terminal 110 may be at least one of a smart phone, a video camera, a desktop computer, a tablet computer, an MP4 player and a laptop computer. Terminal 110 has installed and runs an application with an image-processing function, which may be an image-processing application, a video-processing application or a social application, among others. Illustratively, terminal 110 is a terminal used by a user, and the user's account is logged into the application running on it.
Server 120 includes at least one of a single server, a plurality of servers and a cloud computing platform, and provides background services for image display. Optionally, server 120 undertakes the primary image display work and terminal 110 the secondary work; or server 120 undertakes the secondary work and terminal 110 the primary work; or server 120 and terminal 110 each undertake the image display work independently.
Optionally, server 120 includes an access server, an image display server and a database. The access server provides access services for terminal 110. The image display server determines the at least one projection point from the target image. There may be one or more image display servers; when there are several, at least two of them provide different services, and/or at least two provide the same service, for example in a load-balanced manner or as a primary server and a mirror server, which is not limited in the embodiments of the present application. The database stores the images to be processed and the three-dimensional images to be displayed uploaded by the user; the information stored in the database is information the user has authorized for use.
Terminal 110 may refer broadly to one of a plurality of terminals, with the present embodiment being illustrated only by terminal 110. Those skilled in the art will recognize that the number of terminals may be greater or lesser. For example, the number of the terminals 110 may be only one, or the number of the terminals may be tens or hundreds, or more, where other terminals are also included in the implementation environment. The number and type of terminals are not limited by the embodiments of the present disclosure.
Fig. 2 is a flowchart of an image display method according to an embodiment of the present application. As shown in fig. 2, the method includes the following steps:
201. The terminal generates a three-dimensional face model according to a preset third feature parameter and third expression parameter.
In the embodiments of the present application, the feature parameter represents the weights of the principal components of the feature dimension, and the expression parameter represents the weights of the principal components of the expression dimension. The principal components of the feature dimension describe variation in face shape, such as wide, narrow, fat or thin; the principal components of the expression dimension describe variation in expression, such as an open mouth or closed eyes. Generic values are available for the principal components of both dimensions. The terminal can deform a generic average face model using the preset third feature parameter and third expression parameter together with these generic principal components, thereby generating the three-dimensional face model. The average face model may be obtained through model training, or an existing model may be used directly; the present application is not limited in this respect.
In an optional implementation, the terminal may acquire the preset third feature parameter and third expression parameter, i.e. the initial values of the feature parameter and the expression parameter, when or before processing the first frame image, and then acquire the average face model and the generic values of the principal components of the feature and expression dimensions. The terminal can then compute the three-dimensional face model from these data using formula (1).
$$S = \bar{S} + A_{id}\,\alpha_{id} + A_{exp}\,\alpha_{exp} \qquad (1)$$
where $S$ denotes the three-dimensional face model, $\bar{S}$ the average face model, $A_{id}$ the principal components of the feature dimension, $\alpha_{id}$ the feature parameter, $A_{exp}$ the principal components of the expression dimension, and $\alpha_{exp}$ the expression parameter.
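Formula (1) is a linear morphable-model synthesis and can be written in a few lines of NumPy. The sketch below is illustrative only; the vertex count and the numbers of principal components are assumptions, not values from the patent.

```python
import numpy as np

N = 5000                    # number of model vertices (assumed)
n_id, n_exp = 80, 64        # number of identity / expression components (assumed)

S_mean = np.zeros(3 * N)               # average face model, flattened (x1, y1, z1, ...)
A_id = np.random.randn(3 * N, n_id)    # principal components of the feature dimension
A_exp = np.random.randn(3 * N, n_exp)  # principal components of the expression dimension

alpha_id = np.zeros(n_id)    # preset third feature parameter (initial value)
alpha_exp = np.zeros(n_exp)  # preset third expression parameter (initial value)

# Formula (1): S = S_mean + A_id * alpha_id + A_exp * alpha_exp
S = S_mean + A_id @ alpha_id + A_exp @ alpha_exp
vertices = S.reshape(N, 3)   # one three-dimensional coordinate point per row
```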
In an optional implementation, the terminal may perform face detection and face registration on the first frame image: face detection determines at least one two-dimensional coordinate point included in the facial-organ features of the first frame image, and face registration establishes the third correspondence between those two-dimensional coordinate points and the three-dimensional coordinate points on the three-dimensional face model. The facial-organ features may include the eyes, ears, nose, mouth and so on, and each organ feature may include at least one two-dimensional coordinate point.
For example, referring to fig. 3, fig. 3 is a schematic diagram of the two-dimensional coordinate points included in the facial-organ features and the corresponding three-dimensional coordinate points on the three-dimensional face model according to an embodiment of the present application. Each two-dimensional coordinate point corresponds to one three-dimensional coordinate point on the model.
202. The terminal determines a third pose parameter corresponding to the first frame image according to the facial-organ features included in the first frame image and the third correspondence between the three-dimensional face model and the facial-organ features.
In the embodiments of the present application, when processing the first frame image the terminal can determine the two-dimensional coordinate points included in its facial-organ features, and, from the third correspondence, the corresponding three-dimensional coordinate points on the three-dimensional face model. The terminal then solves for the third pose parameter that minimizes the reprojection error over these two-dimensional coordinate points, their corresponding three-dimensional coordinate points and the third correspondence. Since the pose of a face changes gradually rather than abruptly, this minimizer can be taken as the optimal solution of the third pose parameter.
In an optional implementation, the terminal may substitute the coordinate values of the two-dimensional and three-dimensional coordinate points obtained above into formula (2) below to obtain the optimal solution of the third pose parameter. The third pose parameter may include a third scaling parameter, a third rotation parameter and a third translation parameter. The scaling parameter indicates the scale between the three-dimensional face model and the first frame image; the rotation parameter rotates the pose of the model; and the translation parameter indicates the in-plane translation component of the three-dimensional coordinate points of the model.
$$\min_{s,\,R,\,T} \sum_{k=1}^{K} \left\| s\,R\,V_k + T - I_k \right\|^2 \qquad (2)$$
where $K$ denotes the number of corresponding coordinate pairs, $s$ the scaling parameter, $R$ the rotation parameter, $T$ the translation parameter, $V_k$ the three-dimensional coordinate point on the three-dimensional face model corresponding to the $k$-th two-dimensional coordinate point included in the facial-organ features, and $I_k$ the coordinate value of that two-dimensional coordinate point.
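A minimal sketch of solving formula (2) with off-the-shelf tools follows, assuming a weak-perspective camera (the first two rows of the rotated coordinates serve as the projection) and a rotation-vector parameterization; neither assumption is spelled out in the patent.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(theta, V, I):
    """Residual of formula (2): s*R*V_k + T - I_k under weak perspective."""
    s = np.exp(theta[0])                          # scale, kept positive
    R = Rotation.from_rotvec(theta[1:4]).as_matrix()
    T = theta[4:6]
    proj = s * (V @ R.T)[:, :2] + T               # rotate, drop z, scale, translate
    return (proj - I).ravel()

def fit_pose(V, I, theta0=None):
    """Least-squares fit of (s, R, T) to K corresponding point pairs."""
    if theta0 is None:
        theta0 = np.zeros(6)
    return least_squares(residuals, theta0, args=(V, I)).x

# K corresponding pairs: V holds model points (K, 3), I image landmarks (K, 2).
V = np.random.randn(30, 3)   # dummy data for illustration
I = np.random.randn(30, 2)
theta = fit_pose(V, I)       # [log s, rotation vector (3), T (2)]
```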
203. The terminal adjusts the three-dimensional face model according to the third pose parameter.
In the embodiments of the present application, this adjustment may be implemented through sub-steps 203a to 203d.
203a. The terminal may adjust the pose of the three-dimensional face model according to the third pose parameter.
Having determined the third pose parameter, the terminal can rotate the model to the corresponding pose according to the third rotation parameter it contains. The third rotation parameter may be represented as a rotation matrix.
203b. After adjusting the pose of the model, the terminal can acquire the contour features included in the first frame image.
The terminal can sample the face region of the first frame image in a preset sampling order, obtaining a number of coordinate points as the coordinate points included in the contour features. The preset sampling order may be, for example, to start at the upper-left of the face region and move along its boundary to the upper-right.
The facial features consist of these contour features together with the facial-organ features described above.
203c. The terminal may select, from the adjusted three-dimensional face model, the three-dimensional coordinate points corresponding to the two-dimensional coordinate points included in the contour features, obtaining a fourth correspondence between the facial features and the three-dimensional face model in the first frame image.
The fourth correspondence comprises both the correspondence between the two-dimensional coordinate points of the contour features in the first frame image and three-dimensional coordinate points of the model, and the correspondence between the two-dimensional coordinate points of the facial-organ features and three-dimensional coordinate points of the model.
In an optional implementation, the terminal may define several groups of parallel line segments on the three-dimensional face model according to the topology of the model; these segments are used to determine the three-dimensional coordinate points on the model that correspond to the two-dimensional coordinate points of the contour features. For example, referring to fig. 4, fig. 4 is a schematic diagram of a three-dimensional face model on which several groups of parallel line segments are defined, according to an embodiment of the present application. Correspondingly, the selection may proceed as follows: the terminal acquires the at least one parallel line segment preset in the model, and selects from each segment, in the preset sampling order (i.e. the same order in which the two-dimensional contour points were sampled), a three-dimensional coordinate point located on the boundary of the model. Because the same sampling order is used, a correspondence is established between these boundary points and the two-dimensional coordinate points of the contour features in the first frame image, which is combined with the organ-point correspondence to obtain the fourth correspondence. A sketch of this selection step is given below.
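The sketch assumes each preset parallel line is stored as an ordered array of vertex indices, and that the boundary vertex on a line can be taken as the one with the extremal x-coordinate after posing; the patent only specifies that one boundary point is chosen per segment in a fixed sampling order, so the extremal-x rule is our illustrative assumption.

```python
import numpy as np

def contour_points_from_lines(posed_vertices, parallel_lines, side="left"):
    """Pick one boundary vertex per preset parallel line, in sampling order.

    posed_vertices: (N, 3) model vertices after the pose adjustment;
    parallel_lines: list of index arrays, one per preset parallel segment,
                    listed in the preset sampling order.
    Returns the chosen vertex indices, matched one-to-one (by order) with
    the two-dimensional contour points sampled from the image.
    """
    chosen = []
    for line in parallel_lines:
        xs = posed_vertices[line, 0]
        k = np.argmin(xs) if side == "left" else np.argmax(xs)
        chosen.append(line[k])            # extremal vertex taken as boundary point
    return np.asarray(chosen)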
203d. The terminal may adjust the three-dimensional face model based on the fourth correspondence.
In the embodiments of the present application, the terminal may update the third pose parameter according to the facial-organ features, the contour features and the fourth correspondence; adjust the pose of the model according to the updated third pose parameter; update the fourth correspondence according to the pose-adjusted model; and finally determine the adjusted model according to the updated fourth correspondence and the updated third pose parameter. The updated third pose parameter can again be computed with formula (2) above; the difference from the first computation is that, during the update, $V_k$ denotes the three-dimensional coordinate point on the model corresponding to a two-dimensional coordinate point included in the full facial features (organ points and contour points), and $I_k$ denotes the coordinate value of that two-dimensional coordinate point.
In an optional implementation, after adjusting the pose of the model, the terminal may resample the pose-adjusted model in the preset sampling order to update the fourth correspondence. Specifically, the terminal keeps the correspondence between the two-dimensional coordinate points of the facial-organ features and the three-dimensional coordinate points of the model unchanged, determines a number of three-dimensional coordinate points from the adjusted model in the preset sampling order, and matches them, in that order, with the two-dimensional coordinate points included in the contour features.
In an optional implementation, after obtaining the updated fourth correspondence and the updated third pose parameter, the terminal can determine the fourth feature parameter and fourth expression parameter that change least relative to the third feature parameter and third expression parameter, based on the generic average face model and the principal components of the feature and expression dimensions, together with the two-dimensional coordinate points of the facial features, their corresponding three-dimensional coordinate points on the model, the fourth correspondence and the third pose parameter. Because they change least relative to the third parameters, they can be taken as the optimal solution of the fourth feature and expression parameters, from which the terminal determines the adjusted three-dimensional face model.
In an optional implementation, the terminal may compute the optimal solution of the fourth feature parameter and fourth expression parameter of the adjusted model using formula (3): it substitutes the updated third pose parameter into formula (3), and then, following the updated fourth correspondence, substitutes the coordinate values of the two-dimensional coordinate points of the facial features and of their corresponding three-dimensional coordinate points on the model.
$$\min_{\alpha_{id},\,\alpha_{exp}} \sum_{k=1}^{K} \left\| s\,R\,\big(\bar{S} + A_{id}\,\alpha_{id} + A_{exp}\,\alpha_{exp}\big)_k + T - I_k \right\|^2 \qquad (3)$$
where $K$ denotes the number of corresponding coordinate pairs, $s$ the scaling parameter, $R$ the rotation parameter, $\bar{S}$ the average face model, $A_{id}$ the principal components of the feature dimension, $\alpha_{id}$ the feature parameter, $A_{exp}$ the principal components of the expression dimension, $\alpha_{exp}$ the expression parameter, $T$ the translation parameter, $(\cdot)_k$ the three-dimensional coordinate point of the model corresponding to the $k$-th two-dimensional coordinate point of the facial features, and $I_k$ the coordinate value of that two-dimensional coordinate point.
It should be noted that the terminal updates the third pose parameter based on the fourth correspondence and then adjusts the pose of the model according to the updated parameter; at that point the three-dimensional coordinate points corresponding to the two-dimensional contour points have changed, so the terminal re-determines them from the model, i.e. updates the fourth correspondence. The update of the fourth correspondence in turn requires the third pose parameter to be updated, and the pose of the model to be adjusted accordingly. In other words, the fourth correspondence, the third pose parameter and the pose of the three-dimensional face model are interdependent, and updating any one of them causes the other two to be updated as well. The terminal therefore iterates these updates until the iteration count is reached or the third pose parameter converges, as sketched below.
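A skeleton of this alternating refinement, with the three update operations passed in as callables since the patent does not fix their exact form; only the loop structure and the stopping rule (iteration budget or pose convergence) come from the text above.

```python
import numpy as np

def alternate_fit(update_pose, update_correspondence, update_shape,
                  pose0, corr0, shape0, max_iters=10, tol=1e-6):
    """Iteratively update pose parameter, correspondence and model until the
    iteration budget is exhausted or the pose parameter stops changing."""
    pose, corr, shape = np.asarray(pose0, dtype=float), corr0, shape0
    for _ in range(max_iters):
        new_pose = np.asarray(update_pose(corr, shape, pose), dtype=float)
        corr = update_correspondence(shape, new_pose)   # resample contour points
        shape = update_shape(corr, new_pose, shape)     # refit feature/expression
        converged = np.linalg.norm(new_pose - pose) < tol
        pose = new_pose
        if converged:                                   # pose parameter converged
            break
    return pose, corr, shape
```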
204. The terminal determines the coordinate values of at least one second projection point included in the first frame image according to the third pose parameter and the adjusted three-dimensional face model, where the ratio of the face area covered by the at least one second projection point to the total face area included in the image is greater than a target ratio threshold.
The terminal may project the dense three-dimensional coordinate points of the adjusted model into the first frame image according to the third scaling, rotation and translation parameters contained in the third pose parameter, thereby determining the coordinate values of the at least one second projection point. The dense three-dimensional coordinate points are obtained by sampling at a frequency above a sampling-frequency threshold; they include not only the three-dimensional coordinate points corresponding to the facial-organ and contour features but also points of the forehead, cheek, chin and other regions of the face. Accordingly, the at least one second projection point forms a dense set of two-dimensional coordinate points in the two-dimensional image; by contrast, the model points corresponding only to the two-dimensional coordinates of the organ and contour features may be called sparse three-dimensional coordinate points. Projecting the dense points rather than only the sparse ones yields dense two-dimensional coordinate points, which represent the details of the face in the two-dimensional image more finely, so that the three-dimensional image subsequently displayed by the terminal fits the face more closely.
For example, referring to fig. 5, fig. 5 is a schematic diagram of dense two-dimensional coordinate points according to an embodiment of the present application. Fig. 5 shows a face covered by the at least one second projection point, i.e. by dense two-dimensional coordinate points, obtained by projecting the dense three-dimensional coordinate points of the model onto the two-dimensional image according to the third pose parameter determined in the preceding steps.
In an alternative implementation, the terminal may determine the coordinate value of each projection point through the following formula (4).
$$L = s\,R\,S + T \qquad (4)$$
where $L$ denotes the coordinate value of the projection point, $s$ the scaling parameter, $R$ the rotation parameter, $S$ the coordinate value of the three-dimensional coordinate point on the three-dimensional face model, and $T$ the translation parameter.
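Applied to every dense vertex at once, formula (4) is a one-liner in NumPy; dropping the z row of the rotated coordinates is the same weak-perspective assumption used in the pose-fitting sketch above.

```python
import numpy as np

def project_dense(vertices, s, R, T):
    """Formula (4), L = s*R*S + T, for all dense model vertices at once.

    vertices: (N, 3) adjusted-model coordinates; R: (3, 3) rotation matrix;
    T: (2,) translation. Returns (N, 2) projection-point coordinates.
    """
    rotated = vertices @ R.T          # R * S for every vertex
    return s * rotated[:, :2] + T     # keep x, y as image coordinates
```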
205. The terminal displays the three-dimensional image in the first frame image according to the coordinate values of the at least one second projection point.
To display the three-dimensional image, the terminal may first acquire the three-dimensional image to be displayed, and then display the at least one image point included in it on the at least one projection point in sequence, according to the coordinate values of the projection points. The number of image points included in the three-dimensional image may be the same as or different from the number of projection points.
For example, referring to fig. 6, fig. 6 is a schematic diagram showing the display of a three-dimensional image according to an embodiment of the present application. In fig. 6 the three-dimensional image is a 3D cartoon headgear and the first frame image is an image containing a human face; the terminal displays the 3D cartoon headgear on the face using the coordinate values of the at least one second projection point.
In an optional implementation, when the number of image points included in the three-dimensional image is greater than the number of projection points, the terminal may determine a number of reference points in the three-dimensional image and display each reference point, together with a preset number of points around it, on the projection points.
In an optional implementation, when the number of image points equals the number of projection points, the terminal may display the image points on the projection points one by one in sequence.
In an optional implementation, when the number of image points is smaller than the number of projection points, the terminal may determine a number of reference points in the three-dimensional image and display the reference points and the points around each of them on part of the projection points. A simple way to realize such a mapping is sketched below.
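The three cases above all amount to assigning overlay points to projection points. The sketch below uses a plain nearest-neighbour assignment as an illustrative simplification; it is not the patented selection rule for reference points.

```python
import numpy as np

def anchor_overlay(overlay_uv, proj):
    """Assign every overlay image point to its nearest projection point.

    overlay_uv: (M, 2) reference coordinates of the overlay's image points;
    proj: (P, 2) projection points. Works whether M is larger than, equal to,
    or smaller than P. Returns (M, 2) display positions.
    """
    d = np.linalg.norm(overlay_uv[:, None, :] - proj[None, :, :], axis=-1)
    return proj[np.argmin(d, axis=1)]   # nearest projection point per overlay point
```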
Steps 201 to 205 describe the terminal's processing of the first frame image; steps 206 to 209 describe its processing of each subsequent frame.
206. The terminal determines the second pose parameter corresponding to the target image according to the facial features included in the target image, the first correspondence between the three-dimensional face model in the previous frame image of the target image and the facial features, and the first pose parameter corresponding to the previous frame image, the target image being a non-first frame image.
In the embodiments of the present application, the target image may be any non-first frame image. Before processing it, the terminal may perform face tracking on the two-dimensional coordinate points of the facial features in the previous frame image; when processing the target image it can then determine the two-dimensional coordinate points of its facial features from the tracking result. The terminal also acquires the first correspondence between the three-dimensional face model and the facial features determined when processing the previous frame image, and the first pose parameter corresponding to that image. From the facial features, the correspondence and the first pose parameter, the terminal determines the second pose parameter corresponding to the target image.
In an optional implementation, the pose parameter may include a scaling parameter, a rotation parameter and a translation parameter. Correspondingly, the determination may proceed as follows: for each two-dimensional coordinate point of the facial features in the target image, the terminal finds the corresponding three-dimensional coordinate point on the model via the correspondence, and then, from the coordinate values of the corresponding point pairs together with the first scaling, rotation and translation parameters contained in the first pose parameter, solves for the second scaling, rotation and translation parameters that minimize the objective. Introducing the first pose parameter of the previous frame is equivalent to adding a temporal constraint to the optimization, which keeps the change between the first and second pose parameters smooth and free of jumps, so the minimizer can be taken as the optimal solution of the second scaling, rotation and translation parameters.
In an optional implementation, the terminal may compute this optimal solution through formula (5).
$$\min_{s_t,\,R_t,\,T_t} \sum_{k=1}^{K} \left\| s_t R_t V_k + T_t - I_k \right\|^2 + \lambda_s \left\| s_t - s_{t-1} \right\|^2 + \lambda_R \left\| R_t - R_{t-1} \right\|^2 + \lambda_T \left\| T_t - T_{t-1} \right\|^2 \qquad (5)$$
where $K$ denotes the number of corresponding coordinate pairs, $s_t$ the second scaling parameter, $R_t$ the second rotation parameter, $T_t$ the second translation parameter, $V_k$ the three-dimensional coordinate point on the model corresponding to the $k$-th two-dimensional coordinate point of the facial features, $I_k$ the coordinate value of that two-dimensional coordinate point, $\lambda_s$, $\lambda_R$ and $\lambda_T$ the weight coefficients of the scaling, rotation and translation terms, and $s_{t-1}$, $R_{t-1}$ and $T_{t-1}$ the first scaling, rotation and translation parameters.
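Formula (5) is formula (2) plus quadratic penalties that tie the current pose to the previous frame's pose. A minimal sketch, reusing the weak-perspective, rotation-vector parameterization assumed earlier (the patent does not prescribe a parameterization):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals_t(theta, V, I, theta_prev, lam_s, lam_R, lam_T):
    """Data term of formula (2) plus the temporal penalties of formula (5)."""
    s = np.exp(theta[0])
    R = Rotation.from_rotvec(theta[1:4]).as_matrix()
    T = theta[4:6]
    data = (s * (V @ R.T)[:, :2] + T - I).ravel()
    reg = np.concatenate([
        np.sqrt(lam_s) * (theta[0:1] - theta_prev[0:1]),  # scale smoothness
        np.sqrt(lam_R) * (theta[1:4] - theta_prev[1:4]),  # rotation smoothness
        np.sqrt(lam_T) * (theta[4:6] - theta_prev[4:6]),  # translation smoothness
    ])
    return np.concatenate([data, reg])

def fit_pose_tracked(V, I, theta_prev, lams=(1.0, 1.0, 1.0)):
    """Warm-start from the previous frame's pose and solve formula (5)."""
    sol = least_squares(residuals_t, theta_prev, args=(V, I, theta_prev, *lams))
    return sol.x
```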
207. And the terminal adjusts the face three-dimensional model according to the second gesture parameters.
In the embodiment of the present application, the process of adjusting the face three-dimensional model by the terminal according to the second parameter may be implemented through sub-steps 207a to 207 c.
207a, the terminal may adjust the pose of the facial three-dimensional model according to the second pose parameter.
After determining the second gesture parameter, the terminal may adjust the gesture of the face three-dimensional model based on the second rotation parameter included in the second gesture parameter. Wherein the second rotation parameter may be represented as a rotation matrix.
207b, the terminal may determine a second correspondence of the facial features and the adjusted facial three-dimensional model according to the pose-adjusted facial three-dimensional model.
Since the above-mentioned first correspondence represents the correspondence between the facial three-dimensional model determined in the previous frame image of the target image and the facial features, and the facial three-dimensional model is changed after adjustment with respect to the facial three-dimensional model in the previous frame image, it is necessary to determine the second correspondence between the facial features included in the target image and the adjusted facial three-dimensional model.
In an alternative implementation, the facial features may include facial features and contour features. Correspondingly, the step of determining the second correspondence between the facial features and the adjusted facial three-dimensional model by the terminal according to the adjusted facial three-dimensional model may be: the terminal can keep the corresponding relation between the three-dimensional coordinate points in the face three-dimensional model and the two-dimensional coordinate points included by the five-sense organs. Then, the terminal determines a plurality of three-dimensional coordinate points from the adjusted face three-dimensional model according to a preset sampling sequence. The terminal can correspond the three-dimensional coordinate points determined from the face three-dimensional model with the two-dimensional coordinate points included by the outline features according to the preset sampling sequence, and a second corresponding relation between the face features and the adjusted face three-dimensional model is obtained. Because the corresponding relation between the two-dimensional coordinate points included in the five-sense organ features included in the target image and the three-dimensional coordinate points in the face three-dimensional model is kept unchanged, and then a plurality of three-dimensional coordinate points are determined from the face three-dimensional model, when the gesture of the face three-dimensional model is changed, the three-dimensional coordinate points corresponding to the two-dimensional coordinate points included in the outline features on the face three-dimensional model can be updated, so that the second corresponding relation is more in line with the corresponding relation between the face features included in the target image and the face three-dimensional model with the gesture adjusted.
In an alternative implementation, the terminal may predefine several groups of parallel line segments on the face three-dimensional model according to the topology of the model; these parallel line segments are used to determine the three-dimensional coordinate points on the model that correspond to the two-dimensional coordinate points included in the contour features. The terminal may then determine the plurality of three-dimensional coordinate points from the adjusted face three-dimensional model as follows. The terminal acquires at least one parallel line segment preset in the face three-dimensional model, and selects from each parallel line segment, in the preset sampling order, a three-dimensional coordinate point located on the boundary of the face three-dimensional model; this point corresponds to a two-dimensional coordinate point included in the contour features, as sketched below.
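The text does not spell out how the boundary point on each parallel line segment is found; the sketch below uses one plausible criterion, picking the vertex with the extremal projected x-coordinate on each segment, which should be read as an assumption rather than the claimed method.

```python
import numpy as np


def silhouette_vertices(polylines, V, s, R, T, left_side=True):
    """polylines: list of vertex-index arrays (the parallel line segments).
    V: (N, 3) model vertices; s, R (3x3), T (2,): current pose."""
    proj_x = (s * (V @ R.T[:, :2]) + T)[:, 0]     # projected x of every vertex
    picked = []
    for seg in polylines:                          # preset sampling order
        xs = proj_x[seg]
        k = np.argmin(xs) if left_side else np.argmax(xs)
        picked.append(seg[k])                      # boundary vertex on this segment
    return np.array(picked)
```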
207c, the terminal determines the adjusted face three-dimensional model based on the second correspondence.
In the embodiments of this application, the terminal may determine the adjusted face three-dimensional model based on the second correspondence and on the second feature parameter and the second expression parameter of the face three-dimensional model determined in the previous frame image. By introducing the second feature parameter and the second expression parameter as regularization terms, the face three-dimensional model determined for the target image changes smoothly relative to the face three-dimensional model determined in the previous frame image.
In an optional implementation, the terminal may determine the adjusted face three-dimensional model based on the second correspondence as follows. The terminal acquires the second feature parameter and the second expression parameter determined in the previous frame image of the target image. According to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the three-dimensional coordinate points on the face three-dimensional model corresponding to those two-dimensional coordinate points, the second feature parameter, and the second expression parameter, the terminal determines the first feature parameter and the first expression parameter that change least relative to the second feature parameter and the second expression parameter, as the optimal solution of the first feature parameter and the first expression parameter corresponding to the adjusted face three-dimensional model. The terminal then computes the adjusted face three-dimensional model from the obtained first feature parameter and first expression parameter.
In an alternative implementation, the terminal may calculate the optimal solution of the first feature parameter and the first expression parameter by the following formula (6):

(α_id,t , α_exp,t) = argmin Σ_{k=1..K} ‖ s·R·(V̄_k + A_id·α_id + A_exp·α_exp) + T − I_k ‖² + λ_exp·‖ α_exp − α_exp,t−1 ‖² + λ_id·‖ α_id − α_id,t−1 ‖²    (6)

where K represents the number of coordinate pairs having a correspondence; s represents the scaling parameter; R represents the rotation parameter; V̄ represents the average face three-dimensional model, with V̄_k its point for the k-th pair; A_id represents the principal components of the feature (identity) dimension; α_id,t represents the first feature parameter; A_exp represents the principal components of the expression dimension; α_exp,t represents the first expression parameter; V_k represents the three-dimensional coordinate point on the face three-dimensional model corresponding to the k-th two-dimensional coordinate point included in the facial features (that is, V̄_k + A_id·α_id + A_exp·α_exp); T represents the translation parameter; I_k represents the coordinate values of the two-dimensional coordinate points included in the facial features; λ_exp represents the weight coefficient of the expression parameter; α_exp,t−1 represents the second expression parameter; λ_id represents the weight coefficient of the feature parameter; and α_id,t−1 represents the second feature parameter.
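Because the pose is held fixed in formula (6), the minimization over the feature and expression parameters is a linear least-squares problem. The following sketch solves it via regularized normal equations under an assumed scaled orthographic projection; the array shapes and names are illustrative, not from this application.

```python
import numpy as np


def solve_id_exp(Vbar, A_id, A_exp, I2d, s, R, T,
                 alpha_id_prev, alpha_exp_prev, lam_id, lam_exp):
    """Vbar: (K, 3) mean-model points; A_id: (K, 3, n_id) and
    A_exp: (K, 3, n_exp) principal-component bases restricted to the K
    corresponded points; I2d: (K, 2) image points; R is 3x3, T is (2,)."""
    K = Vbar.shape[0]
    P = s * R[:2]                                  # (2, 3) scaled orthographic map
    # Stack the linear system M @ alpha ~= b over all K correspondences.
    M_id = np.einsum('ij,kjn->kin', P, A_id).reshape(2 * K, -1)
    M_exp = np.einsum('ij,kjn->kin', P, A_exp).reshape(2 * K, -1)
    M = np.hstack([M_id, M_exp])
    b = (I2d - (Vbar @ P.T + T)).reshape(2 * K)
    n_id = M_id.shape[1]
    lam = np.concatenate([np.full(n_id, lam_id),
                          np.full(M_exp.shape[1], lam_exp)])
    prev = np.concatenate([alpha_id_prev, alpha_exp_prev])
    # Regularized normal equations, pulled toward the previous frame's solution.
    A = M.T @ M + np.diag(lam)
    rhs = M.T @ b + lam * prev
    alpha = np.linalg.solve(A, rhs)
    return alpha[:n_id], alpha[n_id:]              # first feature / expression params
```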
In one possible implementation, after determining the adjusted face three-dimensional model, the terminal may update the second pose parameter according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the corresponding three-dimensional coordinate points on the face three-dimensional model, and the first pose parameter; for example, the terminal may calculate the second pose parameter through formula (5). After obtaining the updated second pose parameter, the terminal may adjust the pose of the face three-dimensional model according to the updated second pose parameter, in the same manner as described in sub-step 207a. The terminal may then update the second correspondence according to the adjusted face three-dimensional model, update the second pose parameter again based on the updated second correspondence, and repeat this iterative updating of the second pose parameter, the second correspondence, and the pose of the face three-dimensional model until a preset stop-updating condition is reached. The preset stop-updating condition may be that the second pose parameter converges or that a preset number of iterations is reached, which is not specifically limited in the embodiments of this application. A schematic of this alternation is sketched below.
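This schematic organizes the iteration; the three callables stand in for the correspondence, model, and pose updates of sub-steps 207a to 207c, and the stop conditions are the convergence test and iteration cap named in the text.

```python
import numpy as np


def alternate_until_converged(pose0, update_corr, update_model, update_pose,
                              max_iters=10, tol=1e-4):
    """Alternate correspondence -> model -> pose updates until the pose
    parameter converges or a preset iteration cap is hit."""
    pose = np.asarray(pose0, dtype=float)
    model = None
    for _ in range(max_iters):                     # preset iteration cap
        corr = update_corr(pose)                   # refresh second correspondence
        model = update_model(corr, pose)           # formula (6) solve
        new_pose = update_pose(corr, model)        # formula (5) solve
        if np.linalg.norm(new_pose - pose) < tol:  # second pose parameter converged
            pose = new_pose
            break
        pose = new_pose
    return model, pose
```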
208. The terminal determines coordinate values of at least one first projection point included in the target image according to the second pose parameter and the adjusted face three-dimensional model, where the ratio of the face area covered by the at least one first projection point to the total face area included in the target image is greater than a target ratio threshold.
The terminal may project the dense three-dimensional coordinate points included in the adjusted face three-dimensional model into the target image according to the second scaling parameter, the second rotation parameter, and the second translation parameter included in the second pose parameter, thereby determining the coordinate values of the at least one first projection point included in the target image; a minimal sketch of this projection is given below. For the specific process, refer to step 204; details are not repeated here.
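With the second pose parameter fixed, projecting the dense vertices reduces to a single affine map under the same assumed scaled orthographic model used in the sketches above.

```python
import numpy as np


def project_dense(V, s, R, T):
    """V: (N, 3) dense model vertices; R: 3x3 rotation; T: (2,) translation.
    Returns the (N, 2) coordinate values of the projection points."""
    return s * (V @ R[:2].T) + T
```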
209. The terminal displays the three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point.
For the specific manner, refer to step 205; details are not repeated here.
So that the image display processes described in steps 201 to 209 can be performed in order, each process may also be performed according to whether the image to be processed is the first frame image, as shown in fig. 7. Fig. 7 is a flowchart of another image display method according to an embodiment of this application. When processing an image, the terminal may first determine whether the image to be processed is the first frame image. When it is, the terminal may perform steps 201 to 205: the terminal first performs face detection and face registration on the first frame image, and estimates a third pose parameter from the coordinate points included in the detected five-sense-organ features; the terminal then selects, from the face three-dimensional model according to the third pose parameter, the three-dimensional coordinate points corresponding to the two-dimensional coordinate points included in the contour features; the terminal then updates the third pose parameter according to the five-sense-organ features and the contour features; the terminal then updates the fourth correspondence according to the updated third pose parameter, and determines the adjusted face three-dimensional model according to the updated fourth correspondence. When the third pose parameter has not converged, the process of updating the third pose parameter and the fourth correspondence may be repeated until the third pose parameter converges. The terminal then determines the coordinate values of at least one second projection point included in the first frame image, and finally displays the three-dimensional image on the first frame image. When the image to be processed is the target image, that is, a non-first frame image, the terminal may perform steps 206 to 209: the terminal first performs face tracking on the target image, and then determines the second pose parameter corresponding to the target image based on the first correspondence between the face three-dimensional model and the facial features in the previous frame image and the first pose parameter corresponding to the previous frame image. The remaining processing of the target image is as described in steps 207 to 209 and is not repeated here. A schematic of this per-frame dispatch is sketched below.
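In this sketch, process_first_frame, process_tracked_frame, and render_overlay are hypothetical callables standing in for steps 201 to 205, steps 206 to 209, and the display step respectively; they are not functions defined in this application.

```python
def display_stream(frames, process_first_frame, process_tracked_frame,
                   render_overlay):
    """Per-frame branching of fig. 7: detect on the first frame, track after."""
    state = None                                   # model, pose, correspondence
    for i, frame in enumerate(frames):
        if i == 0:                                 # first frame image
            state = process_first_frame(frame)     # steps 201-205
        else:                                      # target (non-first) image
            state = process_tracked_frame(frame, state)  # steps 206-209
        render_overlay(frame, state)               # display the 3D image
```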
In the embodiments of this application, when the target image is processed, the second pose parameter corresponding to the target image is determined according to the facial features included in the target image, combined with the correspondence between the face three-dimensional model determined in the previous frame image and the facial features and the first pose parameter corresponding to the previous frame image. Because the at least one first projection point consists of dense two-dimensional coordinate points obtained through the second pose parameter and the face three-dimensional model adjusted according to the second pose parameter, when the terminal displays the three-dimensional image, the dense three-dimensional coordinate points on the face three-dimensional model that conform to the target image can be projected onto the two-dimensional image, so that the displayed three-dimensional image fits the details of the face more closely. In addition, the face three-dimensional model is highly general, convenient and flexible to adjust, and accurate in its adjustment results, so the method can be widely applied to the display of three-dimensional images.
Fig. 8 is a block diagram of an image display apparatus according to an embodiment of the present application, as shown in fig. 8, including: a determining module 801, an adjusting module 802 and a displaying module 803.
A determining module 801, configured to determine, according to a facial feature included in a target image, a second pose parameter corresponding to the target image based on a first correspondence between a three-dimensional model of a face in a previous frame of the target image and the facial feature, and a first pose parameter corresponding to the previous frame of image;
an adjustment module 802, configured to adjust the face three-dimensional model according to the second pose parameter;
the determining module 801 is further configured to determine coordinate values of at least one first projection point included in the target image according to the second pose parameter and the adjusted face three-dimensional model;
and a display module 803 configured to display a three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point.
In one possible implementation, the adjustment module 802 is further configured to adjust the pose of the face three-dimensional model according to the second pose parameter; determine a second correspondence between the facial features and the adjusted face three-dimensional model according to the adjusted face three-dimensional model; and determine the adjusted face three-dimensional model based on the second correspondence.
In one possible implementation, the facial features include five-sense-organ features and contour features. The adjustment module 802 is further configured to keep unchanged the correspondence between the three-dimensional coordinate points in the face three-dimensional model and the two-dimensional coordinate points included in the five-sense-organ features, and determine a plurality of three-dimensional coordinate points from the adjusted face three-dimensional model according to a preset sampling order; and match the plurality of three-dimensional coordinate points in the face three-dimensional model with the two-dimensional coordinate points included in the contour features according to the preset sampling order, to obtain the second correspondence between the facial features and the adjusted face three-dimensional model.
In one possible implementation, the adjustment module 802 is further configured to acquire at least one parallel line segment preset in the face three-dimensional model; and select, from each parallel line segment according to the preset sampling order, a coordinate point located on the boundary of the face three-dimensional model, the coordinate point corresponding to a two-dimensional coordinate point included in the contour features.
In one possible implementation, the adjustment module 802 is further configured to acquire the second feature parameter and the second expression parameter of the face three-dimensional model determined in the previous frame image of the target image;
and determine the first feature parameter and the first expression parameter corresponding to the adjusted face three-dimensional model according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the three-dimensional coordinate points on the face three-dimensional model corresponding to the two-dimensional coordinate points included in the facial features, the second feature parameter, and the second expression parameter.
In one possible implementation, the apparatus further includes:
the updating module is configured to update the second pose parameter according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the three-dimensional coordinate points on the face three-dimensional model corresponding to the two-dimensional coordinate points included in the facial features, and the first pose parameter;
The adjustment module 802 is further configured to adjust the pose of the face three-dimensional model according to the updated second pose parameter;
and the updating module is further configured to update the second pose parameter again until a preset stop-updating condition is reached.
In one possible implementation, the display module 803 is further configured to obtain a three-dimensional image to be displayed; and sequentially displaying at least one image point included in the three-dimensional image on the at least one first projection point according to the coordinate values of the at least one first projection point.
In one possible implementation, the apparatus further includes:
the tracking module is configured to perform face tracking on the coordinate points of the facial features included in the previous frame image;
the determining module 801 is further configured to determine, according to a result of the face tracking, a two-dimensional coordinate point included in the facial feature included in the target image.
In one possible implementation, the apparatus further includes:
the generating module is configured to generate the face three-dimensional model according to a preset third feature parameter and a preset third expression parameter;
the determining module 801 is further configured to determine a third pose parameter corresponding to a first frame image according to the five-sense-organ features included in the first frame image and a third correspondence between the face three-dimensional model and the five-sense-organ features;
the adjustment module 802 is further configured to adjust the face three-dimensional model according to the third pose parameter;
the determining module 801 is further configured to determine coordinate values of at least one second projection point included in the first frame image according to the third pose parameter and the adjusted face three-dimensional model;
the display module 803 is further configured to display the three-dimensional image in the first frame image according to the coordinate values of the at least one second projection point.
In one possible implementation, the apparatus further includes:
the detection registration module is configured to perform face detection and face registration on the first frame image and determine at least one two-dimensional coordinate point included by the five-element feature included in the first frame image.
In one possible implementation, the adjustment module 802 is further configured to adjust the pose of the face three-dimensional model according to the third pose parameter; acquire contour features included in the first frame image, where the contour features and the five-sense-organ features form the facial features; select, from the adjusted face three-dimensional model, three-dimensional coordinate points corresponding to the two-dimensional coordinate points included in the contour features, to obtain a fourth correspondence between the facial features in the first frame image and the face three-dimensional model; and adjust the face three-dimensional model based on the fourth correspondence.
In one possible implementation, the adjustment module 802 is further configured to update the third pose parameter according to the five-sense-organ features, the contour features, and the fourth correspondence; adjust the pose of the face three-dimensional model according to the updated third pose parameter; update the fourth correspondence according to the pose-adjusted face three-dimensional model; and determine the adjusted face three-dimensional model according to the updated fourth correspondence and the updated third pose parameter.
In the embodiments of this application, when the target image is processed, the second pose parameter corresponding to the target image is determined according to the facial features included in the target image, combined with the correspondence between the face three-dimensional model determined in the previous frame image and the facial features and the first pose parameter corresponding to the previous frame image. The at least one first projection point is determined through the second pose parameter and the face three-dimensional model adjusted according to the second pose parameter, and the three-dimensional image is displayed in the target image according to the coordinate values of the at least one first projection point. When the terminal displays the three-dimensional image, a face three-dimensional model conforming to the target image can be obtained by adjusting the face three-dimensional model, so that the three-dimensional image is displayed. The face three-dimensional model is highly general, convenient and flexible to adjust, and accurate in its adjustment results, so the method can be widely applied to the display of three-dimensional images.
Fig. 9 is a schematic structural diagram of a terminal 900 according to an embodiment of the present invention. The terminal 900 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 900 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or another name.
In general, the terminal 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 901 may be implemented in at least one hardware form of DSP (Digital Signal Processing ), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array ). The processor 901 may also include a main processor and a coprocessor, the main processor being a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit ); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 901 may integrate a GPU (Graphics Processing Unit, image processor) for rendering and drawing of content required to be displayed by the display screen. In some embodiments, the processor 901 may also include an AI (Artificial Intelligence ) processor for processing computing operations related to machine learning.
The memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 902 is used to store at least one instruction, and the at least one instruction is executed by the processor 901 to implement the image display method provided by the method embodiments of this application.
In some embodiments, the terminal 900 may further optionally include: a peripheral interface 903, and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 903 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 904, a display 905, a camera 906, audio circuitry 907, positioning components 908, and a power source 909.
The peripheral interface 903 may be used to connect at least one peripheral device associated with an I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 901, the memory 902, and the peripheral interface 903 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 904 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 904 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication ) related circuits, which the present invention is not limited to.
The display 905 is used to display a UI (user interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 905 is a touch display, the display 905 also has the ability to capture touch signals at or above the surface of the display 905. The touch signal may be input as a control signal to the processor 901 for processing. At this time, the display 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 905 may be one, providing a front panel of the terminal 900; in other embodiments, the display 905 may be at least two, respectively disposed on different surfaces of the terminal 900 or in a folded design; in some embodiments, the display 905 may be a flexible display disposed on a curved surface or a folded surface of the terminal 900. Even more, the display 905 may be arranged in an irregular pattern other than rectangular, i.e., a shaped screen. The display 905 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 906 is used to capture images or video. Optionally, the camera assembly 906 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement background blurring through fusion of the main camera and the depth-of-field camera, panoramic and VR (virtual reality) shooting through fusion of the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 906 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 901 for processing, or inputting the electric signals to the radio frequency circuit 904 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be plural and disposed at different portions of the terminal 900. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the terminal 900 to enable navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the terminal 900. The power supply 909 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 909 includes a rechargeable battery, the rechargeable battery can support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 900 can further include one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyroscope sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 900. For example, the acceleration sensor 911 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 901 may control the display 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 911. The acceleration sensor 911 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 912 may detect a body direction and a rotation angle of the terminal 900, and the gyro sensor 912 may collect a 3D motion of the user on the terminal 900 in cooperation with the acceleration sensor 911. The processor 901 may implement the following functions according to the data collected by the gyro sensor 912: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 913 may be provided at a side frame of the terminal 900 and/or at a lower layer of the display 905. When the pressure sensor 913 is provided at a side frame of the terminal 900, a grip signal of the user to the terminal 900 may be detected, and the processor 901 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 913. When the pressure sensor 913 is provided at the lower layer of the display 905, the processor 901 performs control of the operability control on the UI interface according to the pressure operation of the user on the display 905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 914 is used for collecting the fingerprint of the user, and the processor 901 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 901 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 914 may be provided on the front, back or side of the terminal 900. When a physical key or a vendor Logo is provided on the terminal 900, the fingerprint sensor 914 may be integrated with the physical key or the vendor Logo.
The optical sensor 915 is used to collect the intensity of ambient light. In one embodiment, the processor 901 may control the display brightness of the display panel 905 based on the intensity of ambient light collected by the optical sensor 915. Specifically, when the ambient light intensity is high, the display luminance of the display screen 905 is turned up; when the ambient light intensity is low, the display luminance of the display panel 905 is turned down. In another embodiment, the processor 901 may also dynamically adjust the shooting parameters of the camera assembly 906 based on the ambient light intensity collected by the optical sensor 915.
A proximity sensor 916, also referred to as a distance sensor, is typically provided on the front panel of the terminal 900. Proximity sensor 916 is used to collect the distance between the user and the front of terminal 900. In one embodiment, when the proximity sensor 916 detects that the distance between the user and the front face of the terminal 900 gradually decreases, the processor 901 controls the display 905 to switch from the bright screen state to the off screen state; when the proximity sensor 916 detects that the distance between the user and the front surface of the terminal 900 gradually increases, the processor 901 controls the display 905 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 9 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
An embodiment of this application further provides a storage medium applied to the terminal. The storage medium stores at least one piece of program code, and the at least one piece of program code is used to perform the image display method in the embodiments of this application.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to be limiting, but rather is intended to cover any and all modifications, equivalents, alternatives, and improvements within the spirit and principles of the present application.

Claims (15)

1. An image display method, the method comprising:
determining a second pose parameter corresponding to a target image according to facial features included in the target image, a first correspondence between a face three-dimensional model in a previous frame image of the target image and the facial features, and a first pose parameter corresponding to the previous frame image;
adjusting the face three-dimensional model according to the second pose parameter;
determining coordinate values of at least one first projection point included in the target image according to the second pose parameter and the adjusted face three-dimensional model;
and displaying a three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point.
2. The method of claim 1, wherein the adjusting the face three-dimensional model according to the second pose parameter comprises:
adjusting the pose of the face three-dimensional model according to the second pose parameter;
determining a second correspondence between the facial features and the adjusted face three-dimensional model according to the adjusted face three-dimensional model;
and determining the adjusted face three-dimensional model based on the second correspondence.
3. The method of claim 2, wherein the facial features include five-sense-organ features and contour features;
the determining, according to the adjusted face three-dimensional model, a second correspondence between the facial features and the adjusted face three-dimensional model comprises:
keeping unchanged the correspondence between the three-dimensional coordinate points in the face three-dimensional model and the two-dimensional coordinate points included in the five-sense-organ features;
determining a plurality of three-dimensional coordinate points from the adjusted face three-dimensional model according to a preset sampling order;
and matching the plurality of three-dimensional coordinate points in the face three-dimensional model with the two-dimensional coordinate points included in the contour features according to the preset sampling order, to obtain the second correspondence between the facial features and the adjusted face three-dimensional model.
4. The method of claim 3, wherein the determining a plurality of three-dimensional coordinate points from the adjusted face three-dimensional model according to a preset sampling order comprises:
acquiring at least one parallel line segment preset in the face three-dimensional model;
and selecting, from each parallel line segment according to the preset sampling order, a coordinate point located on the boundary of the face three-dimensional model, the coordinate point corresponding to a two-dimensional coordinate point included in the contour features.
5. The method of claim 3, wherein the determining the adjusted face three-dimensional model based on the second correspondence comprises:
acquiring a second feature parameter and a second expression parameter of the face three-dimensional model determined in the previous frame image of the target image;
and determining a first feature parameter and a first expression parameter corresponding to the adjusted face three-dimensional model according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the three-dimensional coordinate points on the face three-dimensional model corresponding to the two-dimensional coordinate points included in the facial features, the second feature parameter, and the second expression parameter.
6. The method of claim 5, wherein after the determining the first feature parameter and the first expression parameter corresponding to the adjusted face three-dimensional model, the method further comprises:
updating the second pose parameter according to the coordinate values of the two-dimensional coordinate points included in the facial features, the coordinate values of the three-dimensional coordinate points on the face three-dimensional model corresponding to the two-dimensional coordinate points included in the facial features, and the first pose parameter;
adjusting the pose of the face three-dimensional model according to the updated second pose parameter;
and updating the second pose parameter again until a preset stop-updating condition is reached.
7. The method according to claim 1, wherein displaying the three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point includes:
acquiring a three-dimensional image to be displayed;
and displaying at least one image point included in the three-dimensional image on the at least one first projection point in sequence according to the coordinate value of the at least one first projection point.
8. The method according to claim 1, wherein before the determining the second pose parameter corresponding to the target image according to the facial feature included in the target image, based on the first correspondence between the facial three-dimensional model in the previous frame image of the target image and the facial feature, and the first pose parameter corresponding to the previous frame image, the method further comprises:
performing face tracking on the coordinate points of the facial features included in the previous frame image;
and determining two-dimensional coordinate points included by facial features included by the target image according to the result of the face tracking.
9. The method according to claim 1, wherein before the determining the second pose parameter corresponding to the target image according to the facial feature included in the target image, based on the first correspondence between the facial three-dimensional model in the previous frame image of the target image and the facial feature, and the first pose parameter corresponding to the previous frame image, the method further comprises:
generating the face three-dimensional model according to a preset third feature parameter and a preset third expression parameter;
determining a third pose parameter corresponding to a first frame image according to the five-sense-organ features included in the first frame image and a third correspondence between the face three-dimensional model and the five-sense-organ features;
adjusting the face three-dimensional model according to the third pose parameter;
determining coordinate values of at least one second projection point included in the first frame image according to the third pose parameter and the adjusted face three-dimensional model;
and displaying the three-dimensional image in the first frame image according to the coordinate value of the at least one second projection point.
10. The method of claim 9, wherein before the determining the third pose parameter corresponding to the first frame image according to the five-sense-organ features included in the first frame image and the third correspondence between the face three-dimensional model and the five-sense-organ features, the method further comprises:
performing face detection and face registration on the first frame image, and determining at least one two-dimensional coordinate point included in the five-sense-organ features included in the first frame image.
11. The method of claim 9, wherein the adjusting the face three-dimensional model according to the third pose parameter comprises:
adjusting the pose of the face three-dimensional model according to the third pose parameter;
acquiring contour features included in the first frame image, wherein the contour features and the five-sense-organ features form the facial features;
selecting, from the adjusted face three-dimensional model, three-dimensional coordinate points corresponding to the two-dimensional coordinate points included in the contour features, to obtain a fourth correspondence between the facial features in the first frame image and the face three-dimensional model;
and adjusting the face three-dimensional model based on the fourth correspondence.
12. The method of claim 11, wherein the adjusting the face three-dimensional model based on the fourth correspondence comprises:
updating the third pose parameter according to the five-sense-organ features, the contour features, and the fourth correspondence;
adjusting the pose of the face three-dimensional model according to the updated third pose parameter;
updating the fourth correspondence according to the pose-adjusted face three-dimensional model;
and determining the adjusted face three-dimensional model according to the updated fourth correspondence and the updated third pose parameter.
13. An image display device, the device comprising:
a determining module, configured to determine a second pose parameter corresponding to a target image according to facial features included in the target image, a first correspondence between a face three-dimensional model in a previous frame image of the target image and the facial features, and a first pose parameter corresponding to the previous frame image;
an adjustment module, configured to adjust the face three-dimensional model according to the second pose parameter;
the determining module being further configured to determine coordinate values of at least one first projection point included in the target image according to the second pose parameter and the adjusted face three-dimensional model;
and a display module, configured to display a three-dimensional image to be displayed in the target image according to the coordinate values of the at least one first projection point.
14. A terminal, comprising a processor and a memory for storing at least one piece of program code, the at least one piece of program code being loaded and executed by the processor to perform the image display method of any one of claims 1 to 8.
15. A storage medium storing at least one piece of program code, the at least one piece of program code being loaded and executed by a processor to perform the image display method of any one of claims 1 to 8.
CN201911039786.7A 2019-10-29 2019-10-29 Image display method, device, terminal and storage medium Active CN110796083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911039786.7A CN110796083B (en) 2019-10-29 2019-10-29 Image display method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911039786.7A CN110796083B (en) 2019-10-29 2019-10-29 Image display method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110796083A CN110796083A (en) 2020-02-14
CN110796083B true CN110796083B (en) 2023-07-04

Family

ID=69442045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911039786.7A Active CN110796083B (en) 2019-10-29 2019-10-29 Image display method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110796083B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233142A (en) * 2020-09-29 2021-01-15 深圳宏芯宇电子股份有限公司 Target tracking method, device and computer readable storage medium
CN113223188B (en) * 2021-05-18 2022-05-27 浙江大学 Video face fat and thin editing method
CN116527993A (en) * 2022-01-24 2023-08-01 脸萌有限公司 Video processing method, apparatus, electronic device, storage medium and program product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101672A (en) * 2007-07-13 2008-01-09 中国科学技术大学 Stereo vision three-dimensional human face modelling approach based on dummy image
KR20080095680A (en) * 2007-04-25 2008-10-29 포항공과대학교 산학협력단 Method for recognizing face gesture using 3-dimensional cylinder head model
CN103530900A (en) * 2012-07-05 2014-01-22 北京三星通信技术研究有限公司 Three-dimensional face model modeling method, face tracking method and equipment
CN104966316A (en) * 2015-05-22 2015-10-07 腾讯科技(深圳)有限公司 3D face reconstruction method, apparatus and server
GB201613959D0 (en) * 2015-08-14 2016-09-28 Metail Ltd Methods of generating personalized 3d head models or 3d body models
CN109767487A (en) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Face three-dimensional rebuilding method, device, electronic equipment and storage medium
KR20190079503A (en) * 2017-12-27 2019-07-05 한국전자통신연구원 Apparatus and method for registering face posture for face recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140009465A1 (en) * 2012-07-05 2014-01-09 Samsung Electronics Co., Ltd. Method and apparatus for modeling three-dimensional (3d) face, and method and apparatus for tracking face
US10540817B2 (en) * 2017-03-03 2020-01-21 Augray Pvt. Ltd. System and method for creating a full head 3D morphable model

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080095680A (en) * 2007-04-25 2008-10-29 포항공과대학교 산학협력단 Method for recognizing face gesture using 3-dimensional cylinder head model
CN101101672A (en) * 2007-07-13 2008-01-09 中国科学技术大学 Stereo vision three-dimensional human face modelling approach based on dummy image
CN103530900A (en) * 2012-07-05 2014-01-22 北京三星通信技术研究有限公司 Three-dimensional face model modeling method, face tracking method and equipment
CN104966316A (en) * 2015-05-22 2015-10-07 腾讯科技(深圳)有限公司 3D face reconstruction method, apparatus and server
WO2016188318A1 (en) * 2015-05-22 2016-12-01 腾讯科技(深圳)有限公司 3d human face reconstruction method, apparatus and server
GB201613959D0 (en) * 2015-08-14 2016-09-28 Metail Ltd Methods of generating personalized 3d head models or 3d body models
KR20190079503A (en) * 2017-12-27 2019-07-05 한국전자통신연구원 Apparatus and method for registering face posture for face recognition
CN109767487A (en) * 2019-01-04 2019-05-17 北京达佳互联信息技术有限公司 Face three-dimensional rebuilding method, device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Construction of two-dimensional and three-dimensional multimodal face databases; Fu Zehua; Gong Xun; Li Tianrui; Journal of Data Acquisition and Processing (Issue 03); full text *

Also Published As

Publication number Publication date
CN110796083A (en) 2020-02-14

Similar Documents

Publication Publication Date Title
US11678734B2 (en) Method for processing images and electronic device
US11205282B2 (en) Relocalization method and apparatus in camera pose tracking process and storage medium
US20200294243A1 (en) Method, electronic device and storage medium for segmenting image
CN109978989B (en) Three-dimensional face model generation method, three-dimensional face model generation device, computer equipment and storage medium
US11517099B2 (en) Method for processing images, electronic device, and storage medium
US11436779B2 (en) Image processing method, electronic device, and storage medium
CN109308727B (en) Virtual image model generation method and device and storage medium
CN109977775B (en) Key point detection method, device, equipment and readable storage medium
CN110263617B (en) Three-dimensional face model obtaining method and device
CN111028144B (en) Video face changing method and device and storage medium
CN109558837B (en) Face key point detection method, device and storage medium
CN110796083B (en) Image display method, device, terminal and storage medium
CN112581358B (en) Training method of image processing model, image processing method and device
CN113763228B (en) Image processing method, device, electronic equipment and storage medium
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
WO2020233403A1 (en) Personalized face display method and apparatus for three-dimensional character, and device and storage medium
CN111723803B (en) Image processing method, device, equipment and storage medium
CN112337105B (en) Virtual image generation method, device, terminal and storage medium
CN113160031B (en) Image processing method, device, electronic equipment and storage medium
CN110853124B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN109767482B (en) Image processing method, device, electronic equipment and storage medium
CN109345636B (en) Method and device for obtaining virtual face image
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN109685881B (en) Volume rendering method and device and intelligent equipment
CN111797754A (en) Image detection method, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022086

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant