CN108765551B - Method and terminal for realizing face pinching of 3D model


Info

Publication number: CN108765551B
Authority: CN (China)
Prior art keywords: model, adjusted, face, feature points, coordinates
Legal status: Active
Application number: CN201810461808.8A
Other languages: Chinese (zh)
Other versions: CN108765551A
Inventors: 刘德建, 林琛, 张展展
Current assignee: Fujian Tianyi Network Technology Co ltd
Original assignee: Fujian Tianyi Network Technology Co ltd
Application filed by Fujian Tianyi Network Technology Co ltd
Priority to CN201810461808.8A
Publication of CN108765551A
Application granted
Publication of CN108765551B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention provides a method and a terminal for realizing face pinching of a 3D model. Adjustment parameters are obtained according to the coordinates corresponding to the part to be adjusted on the 3D model in an uploaded picture and the adjustment reference of the part to be adjusted, and the part to be adjusted on the 3D model is adjusted according to the adjustment parameters, so that face pinching of the 3D model is realized. Because the adjustment is performed on the skeleton of the model, accurate 3D face pinching with a vivid 3D effect is achieved. The adaptability is also high: the method is suitable for photos with different face types and can realize face pinching of the 3D model from photos with different face types.

Description

Method and terminal for realizing face pinching of 3D model
Technical Field
The invention relates to the field of 3D model editing, in particular to a method and a terminal for realizing face pinching of a 3D model.
Background
In the prior art, a common way of realizing face pinching of a 3D model is to change the material ball map of the model: the user takes a picture with a mobile phone or another device capable of taking pictures, and a stereoscopic model of the user's face is formed from the acquired picture. However, this approach cannot achieve a fine face-pinching effect and does not adapt to different face types.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method and a terminal for realizing face pinching of a 3D model with fine control and high applicability.
In order to solve the technical problems, the invention adopts a technical scheme that:
a method for realizing face pinching of a 3D model comprises the following steps:
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
and S3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter.
In order to solve the technical problem, the invention adopts another technical scheme as follows:
a terminal implementing a 3D model face-pinching, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
and S3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter.
The invention has the beneficial effects that: adjustment parameters are obtained according to the coordinates corresponding to the part to be adjusted on the 3D model in the uploaded picture and the adjustment reference of the part to be adjusted, and the part to be adjusted on the 3D model is adjusted according to the adjustment parameters, so that face pinching of the 3D model is realized. Because the adjustment can be performed on the skeleton of the model, not only can accurate 3D face pinching with a vivid 3D effect be achieved, but the applicability is also high: the method is suitable for photos with different face types and can realize face pinching of the 3D model from such photos.
Drawings
FIG. 1 is a flowchart of a method for implementing 3D face-pinching in accordance with an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a terminal for implementing 3D face pinching according to an embodiment of the present invention;
description of reference numerals:
1. a terminal for realizing face pinching of a 3D model; 2. a memory; 3. a processor.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
The key concept of the invention is that adjustment parameters are obtained according to the coordinates corresponding to the part to be adjusted on the 3D model in the uploaded picture and the adjustment reference of the part to be adjusted, and the part to be adjusted on the 3D model is adjusted according to the adjustment parameters, so that face pinching of the 3D model is realized.
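Purely as an illustration of this flow, the Python sketch below strings the three steps together; every name in it (the Part structure, the detector callable and the bone-editing callback) is a hypothetical stand-in and not an API defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple

Point = Tuple[float, float]  # (x, y) coordinate of one facial feature point


@dataclass
class Part:
    """One adjustable part of the 3D model (hypothetical structure)."""
    name: str                  # e.g. "eye size"
    feature_points: List[int]  # indices into the first coordinate set


def pinch_face(
    detect_feature_points: Callable[[bytes], Sequence[Point]],   # S1: landmark detection
    compute_parameter: Callable[[str, Sequence[Point]], float],  # S2-S3: reference and parameter
    apply_adjustment: Callable[[str, float], None],              # engine-side bone/mesh edit
    picture: bytes,
    parts: List[Part],
) -> Dict[str, float]:
    """Drive steps S1-S3 for every adjustable part and return the applied parameters."""
    first_coordinate_set = detect_feature_points(picture)         # S1
    applied: Dict[str, float] = {}
    for part in parts:
        coords = [first_coordinate_set[i] for i in part.feature_points]  # S2
        parameter = compute_parameter(part.name, coords)                 # S3
        apply_adjustment(part.name, parameter)
        applied[part.name] = parameter
    return applied
```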
Referring to fig. 1, a method for implementing a 3D model face pinching includes the steps of:
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
and S3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter.
From the above description, the beneficial effects of the present invention are: adjustment parameters are obtained according to the coordinates corresponding to the part to be adjusted on the 3D model in the uploaded picture and the adjustment reference of the part to be adjusted, and the part to be adjusted on the 3D model is adjusted according to the adjustment parameters, so that face pinching of the 3D model is realized. Because the adjustment can be performed on the skeleton of the model, not only can accurate 3D face pinching with a vivid 3D effect be achieved, but the applicability is also high: the method is suitable for photos with different face types and can realize face pinching of the 3D model from such photos.
Further, the step S1 is preceded by the steps of:
s01, receiving the uploaded pictures, judging whether the pictures have human faces or not, if so, executing a step S02, otherwise, returning to the step S01;
and S02, judging whether the face in the picture is within a preset range from the camera, if not, returning to the step S01, and if so, executing the step S1.
According to the above description, judging whether a face exists in the picture and whether the face was shot within the preset range ensures the effectiveness of the subsequent face pinching of the 3D model and avoids invalid operations.
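As an illustration only, steps S01-S02 could be implemented along the lines of the Python sketch below; the patent does not name a face detector, so OpenCV's bundled Haar cascade is used here, and the face-to-image size ratio is an assumed stand-in for the "preset range from the camera" test.

```python
import cv2

# Hedged sketch of steps S01-S02: the patent does not prescribe a face detector,
# so OpenCV's bundled Haar cascade is used here purely for illustration.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def picture_is_usable(image_path: str,
                      min_face_ratio: float = 0.15,
                      max_face_ratio: float = 0.80) -> bool:
    """Return True only if the picture contains a face (S01) whose apparent size
    suggests it was shot within the preset distance range (S02)."""
    image = cv2.imread(image_path)
    if image is None:
        return False                      # unreadable upload, receive again
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False                      # S01: no face in the picture
    # S02 (assumption): the ratio of the face width to the image width stands in
    # for the "preset range from the camera" test, which the patent leaves open.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    ratio = w / image.shape[1]
    return min_face_ratio <= ratio <= max_face_ratio
```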
Further, the step S2 includes:
s21, adding a configuration file corresponding to the 3D model of the face to be pinched, wherein the configuration file comprises an adjustable part type, feature points corresponding to the part type, coordinates of the feature points corresponding to the part type in different proportions and reference feature points of the 3D model;
s22, determining the coordinates corresponding to the part to be adjusted in the first coordinate set according to the feature points corresponding to the part types;
and S23, determining an adjusting standard of the part to be adjusted on the 3D model of the face to be pinched according to the feature points corresponding to the part types, the coordinates of the feature points corresponding to the part types under different proportions and the reference feature points of the 3D model.
According to the above description, a corresponding configuration file is set for the 3D model of the face to be pinched, and the parameters needed for face pinching are obtained from this configuration file, which is convenient and fast.
Further, the coordinates of the feature points corresponding to the part types at different scales in step S21 include:
and coordinates of feature points respectively corresponding to the maximum size, the minimum size and the intermediate value of the maximum size and the minimum size of the part corresponding to the part type.
As can be seen from the above description, by setting different proportions, it can be ensured that there is a reference datum when the 3D model is used for pinching the face, and the 3D model can be adapted.
Further, the step S23 includes:
the adjusting reference is as follows:
d^1_b = ((d^0_m - d^0_n) / (d^1_m - d^1_n)) * (d^1_i - d^1_j);
d^-1_b = ((d^0_m - d^0_n) / (d^-1_m - d^-1_n)) * (d^-1_i - d^-1_j);
in step S3, obtaining an adjustment parameter according to the coordinate corresponding to the portion to be adjusted and the adjustment reference of the portion to be adjusted includes:
calculating d^k_b = ((d^0_m - d^0_n) / (d^k_m - d^k_n)) * (d^k_i - d^k_j);
The adjustment parameter is: (d^k_b - (d^0_i - d^0_j)) / ((d^1_b - d^-1_b) / 2);
wherein d^y_x denotes the coordinate corresponding to a feature point, and x denotes the feature point: x = m, n denote the reference feature points of the 3D model, and x = i, j denote the feature points corresponding to the part to be adjusted; y = 1 denotes the coordinate of the feature point when the size of the part to be adjusted in the 3D model is at its maximum, y = -1 when it is at its minimum, y = 0 when it is at the intermediate value between the maximum and the minimum, and y = k denotes the coordinate of the corresponding feature point in the picture.
From the above description, it can be seen that the feature points of the picture are calibrated and the position and size of the facial skeleton are calculated through the algorithm; the processed picture is converted into adjustment parameters for the skeleton and the mesh of the 3D model face, and the generated adjustment parameters can be used to adjust that skeleton and mesh so as to form a three-dimensional face model. In this way, different face shapes can be accommodated and higher face-pinching similarity can be obtained.
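For concreteness, the two formulas can be implemented directly; the Python helper below follows the d^y_x notation above and is only a sketch, not code from the patent.

```python
def adjustment_parameter(d0, d1, dm1, dk, m, n, i, j):
    """Compute the adjustment parameter of one part, following the formulas above.

    d0, d1, dm1 and dk map feature-point ids to coordinates at scale 0 (intermediate),
    scale 1 (maximum), scale -1 (minimum) and in the uploaded picture, respectively;
    m, n are the reference feature points and i, j the feature points of the part.
    """
    ref_len = d0[m] - d0[n]                                    # size reference length at scale 0
    d1_b = (ref_len / (d1[m] - d1[n])) * (d1[i] - d1[j])       # adjustment reference at scale 1
    dm1_b = (ref_len / (dm1[m] - dm1[n])) * (dm1[i] - dm1[j])  # adjustment reference at scale -1
    dk_b = (ref_len / (dk[m] - dk[n])) * (dk[i] - dk[j])       # picture rescaled to the scale-0 model
    return (dk_b - (d0[i] - d0[j])) / ((d1_b - dm1_b) / 2)
```

With coordinate dictionaries loaded from the configuration file and the picture, the eye example of the third embodiment below would reduce to a call such as adjustment_parameter(d0, d1, dm1, dk, 11, 2, 25, 21).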
Referring to fig. 2, a terminal for implementing a 3D model face-pinching includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer program:
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
and S3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter.
From the above description, the beneficial effects of the present invention are: adjustment parameters are obtained according to the coordinates corresponding to the part to be adjusted on the 3D model in the uploaded picture and the adjustment reference of the part to be adjusted, and the part to be adjusted on the 3D model is adjusted according to the adjustment parameters, so that face pinching of the 3D model is realized. Because the adjustment can be performed on the skeleton of the model, not only can accurate 3D face pinching with a vivid 3D effect be achieved, but the applicability is also high: the method is suitable for photos with different face types and can realize face pinching of the 3D model from such photos.
Further, the step S1 is preceded by the steps of:
s01, receiving the uploaded pictures, judging whether the pictures have human faces or not, if so, executing a step S02, otherwise, returning to the step S01;
and S02, judging whether the face in the picture is within a preset range from the camera, if not, returning to the step S01, and if so, executing the step S1.
According to the above description, judging whether a face exists in the picture and whether the face was shot within the preset range ensures the effectiveness of the subsequent face pinching of the 3D model and avoids invalid operations.
Further, the step S2 includes:
s21, adding a configuration file corresponding to the 3D model of the face to be pinched, wherein the configuration file comprises an adjustable part type, feature points corresponding to the part type, coordinates of the feature points corresponding to the part type in different proportions and reference feature points of the 3D model;
s22, determining the coordinates corresponding to the part to be adjusted in the first coordinate set according to the feature points corresponding to the part types;
and S23, determining an adjusting standard of the part to be adjusted on the 3D model of the face to be pinched according to the feature points corresponding to the part types, the coordinates of the feature points corresponding to the part types under different proportions and the reference feature points of the 3D model.
According to the above description, a corresponding configuration file is set for the 3D model of the face to be pinched, and the parameters needed for face pinching are obtained from this configuration file, which is convenient and fast.
Further, the coordinates of the feature points corresponding to the part types at different scales in step S21 include:
and coordinates of feature points respectively corresponding to the maximum size, the minimum size and the intermediate value of the maximum size and the minimum size of the part corresponding to the part type.
As can be seen from the above description, by setting different proportions, it can be ensured that there is a reference datum when the 3D model is used for pinching the face, and the 3D model can be adapted.
Further, the step S23 includes:
the adjusting reference is as follows:
d^1_b = ((d^0_m - d^0_n) / (d^1_m - d^1_n)) * (d^1_i - d^1_j);
d^-1_b = ((d^0_m - d^0_n) / (d^-1_m - d^-1_n)) * (d^-1_i - d^-1_j);
in step S3, obtaining an adjustment parameter according to the coordinate corresponding to the portion to be adjusted and the adjustment reference of the portion to be adjusted includes:
calculating d^k_b = ((d^0_m - d^0_n) / (d^k_m - d^k_n)) * (d^k_i - d^k_j);
The adjustment parameter is: (d^k_b - (d^0_i - d^0_j)) / ((d^1_b - d^-1_b) / 2);
wherein d^y_x denotes the coordinate corresponding to a feature point, and x denotes the feature point: x = m, n denote the reference feature points of the 3D model, and x = i, j denote the feature points corresponding to the part to be adjusted; y = 1 denotes the coordinate of the feature point when the size of the part to be adjusted in the 3D model is at its maximum, y = -1 when it is at its minimum, y = 0 when it is at the intermediate value between the maximum and the minimum, and y = k denotes the coordinate of the corresponding feature point in the picture.
From the above description, it can be seen that the feature points of the picture are calibrated and the position and size of the facial skeleton are calculated through the algorithm; the processed picture is converted into adjustment parameters for the skeleton and the mesh of the 3D model face, and the generated adjustment parameters can be used to adjust that skeleton and mesh so as to form a three-dimensional face model. In this way, different face shapes can be accommodated and higher face-pinching similarity can be obtained.
Example one
Referring to fig. 1, a method for implementing a 3D model face pinching includes the steps of:
s01, receiving the uploaded pictures, judging whether the pictures have human faces or not, if so, executing a step S02, otherwise, returning to the step S01;
s02, judging whether the face in the picture is within a preset range from a camera, if not, returning to the step S01, and if so, executing the step S1;
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
specifically, the method comprises the following steps:
s21, adding a configuration file corresponding to the 3D model of the face to be pinched, wherein the configuration file comprises an adjustable part type, feature points corresponding to the part type, coordinates of the feature points corresponding to the part type in different proportions and reference feature points of the 3D model, and the coordinates of the feature points corresponding to the part type in different proportions comprise: coordinates of feature points respectively corresponding to the maximum size, the minimum size and the intermediate value of the maximum size and the minimum size of the part corresponding to the part type;
s22, determining the coordinates corresponding to the part to be adjusted in the first coordinate set according to the feature points corresponding to the part types;
s23, determining an adjusting standard of a part to be adjusted on the 3D model of the face to be pinched according to the feature points corresponding to the part types, the coordinates of the feature points corresponding to the part types under different proportions and the reference feature points of the 3D model;
the calculation formula of the adjustment reference is as follows:
d^1_b = ((d^0_m - d^0_n) / (d^1_m - d^1_n)) * (d^1_i - d^1_j);
d^-1_b = ((d^0_m - d^0_n) / (d^-1_m - d^-1_n)) * (d^-1_i - d^-1_j);
s3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter;
specifically, the adjustment parameters are calculated as follows:
calculating d^k_b = ((d^0_m - d^0_n) / (d^k_m - d^k_n)) * (d^k_i - d^k_j);
The adjustment parameter is: (d^k_b - (d^0_i - d^0_j)) / ((d^1_b - d^-1_b) / 2);
wherein d^y_x denotes the coordinate corresponding to a feature point, and x denotes the feature point: x = m, n denote the reference feature points of the 3D model, and x = i, j denote the feature points corresponding to the part to be adjusted; y = 1 denotes the coordinate of the feature point when the size of the part to be adjusted in the 3D model is at its maximum, y = -1 when it is at its minimum, y = 0 when it is at the intermediate value between the maximum and the minimum, and y = k denotes the coordinate of the corresponding feature point in the picture.
Example two
Referring to fig. 2, a terminal 1 for implementing a 3D model face-pinching includes a memory 2, a processor 3, and a computer program stored on the memory 2 and executable on the processor 3, where the processor 3 implements the steps in the first embodiment when executing the computer program.
EXAMPLE III
The method for realizing the face pinching of the 3D model is applied to an actual scene:
A photo taken by a mobile phone or another device capable of taking pictures is received, and whether a face exists in the photo is judged; if not, the photo is received again. If a face exists, whether the face was shot within a preset range from the camera is judged; if not, the photo is received again. If so, the coordinates corresponding to 83 feature points of the face in the photo are returned through the Face++ face recognition algorithm to form the first coordinate set, where the 83 feature points correspond to the feature points on the 3D model of the face to be pinched. Optionally, other numbers of points can be selected, such as 106 or 64 points, but 83 points are preferred because they guarantee both accuracy and efficiency;
adding a configuration file corresponding to the 3D model of the face to be pinched, wherein the configuration file can be in a json format and comprises the following fields:
[Configuration file field listing shown as image BDA0001661090730000081 in the original publication]
wherein, adjustable position includes: head size, head length, head width, neck size, face length, face size, arch of eyebrows protruding, eyebrow height, eyebrow spacing, eyebrow angle, eye position, eye spacing, eye size, upper eyelid opening and closing, lower eyelid opening and closing, eye tilt, upper eyelid shape, lower eyelid shape, outer canthus height, inner canthus height, outer canthus spacing, inner canthus spacing, eye protrusion, pupil size, pupillary spacing, nose size, nose bulge, nose position, nostril size, bridge of nose height, nose tip orientation, nose size, nose shape, mouth position, mouth size, valley height, upper lip thickness, lower lip thickness, upper lip shape, lower lip shape, mouth corner position, mouth corner spacing, chin width, chin length, chin width;
the coordinates of the feature points of a part at the corresponding size can be obtained from these file fields: scale values of 1, -1 and 0 respectively indicate that the part corresponding to the part type is at its maximum size, its minimum size and the intermediate value between the maximum and the minimum, and the coordinates of the corresponding feature points are obtained for each;
the reference feature points can be selected from the 83 feature points as required;
the point for determining the size of the adjustable portion may be selected from 83 feature points as needed;
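As an illustration only, a fragment of such a json configuration file for the "eye size" part might look like the following; the field names and all coordinate values are hypothetical, since the patent shows the actual field layout only as an image:

```python
import json

# Hypothetical fragment of the json configuration for the "eye size" part.
# Field names and every coordinate value are invented for illustration; only the
# ideas (scale -1/0/1 snapshots, reference feature points) come from the patent.
eye_size_config = {
    "part": "eye size",
    "reference_feature_points": [11, 2],   # reference feature points of the 3D model
    "feature_points": [25, 21],            # feature points measuring the eye width
    "scales": {
        "1":  {"11": 0.312, "2": 0.050, "25": 0.231, "21": 0.187},  # eyes at maximum size
        "0":  {"11": 0.310, "2": 0.052, "25": 0.224, "21": 0.196},  # intermediate size
        "-1": {"11": 0.308, "2": 0.054, "25": 0.218, "21": 0.203},  # eyes at minimum size
    },
}

print(json.dumps(eye_size_config, indent=2))
```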
based on the scanned picture, each adjustable part is calculated and adjusted, and the following description takes the adjustment of the eye as an example:
points 11 and 2 among the 83 feature points are selected as the reference feature points, and the distance between these two points, namely the difference of their x-axis coordinates, is used as the size reference length; points 25 and 21 are the feature points used to determine the size of the eye;
determining the values of the characteristic points under different scale values according to the file fields of the configuration file:
when scale = 0, the x-axis coordinate of point 11 of the 3D model (the x-axis is used because the left-right size of the eye is being adjusted, so the x-axis coordinates correspond to left and right) is denoted d^0_11, the x-axis coordinate of point 2 is denoted d^0_2, the x-axis coordinate of point 25 is denoted d^0_25, and the x-axis coordinate of point 21 is denoted d^0_21;
when scale = 1, the x-axis coordinates of points 11, 2, 25 and 21 of the 3D model are denoted d^1_11, d^1_2, d^1_25 and d^1_21 respectively;
when scale = -1, the x-axis coordinates of points 11, 2, 25 and 21 of the 3D model are denoted d^-1_11, d^-1_2, d^-1_25 and d^-1_21 respectively;
these values respectively represent the coordinates of the feature points when the eyes are at the intermediate, maximum and minimum sizes, i.e. the values of the corresponding parts of the model skeleton under which the model is neither broken nor deformed; these three groups of reference data of different sizes serve as the reference when the skeleton of the 3D model is adjusted, the coordinates of the eyes at the different sizes can be obtained from the returned json data, and the accuracy of the three groups of data is verified;
based on the obtained picture and according to the configuration file, the x-axis coordinate of point 11 of the face in the picture is determined from the first coordinate set and denoted d^k_11, the x-axis coordinate of point 2 is denoted d^k_2, the x-axis coordinate of point 25 of the face is denoted d^k_25, and the x-axis coordinate of point 21 is denoted d^k_21;
Calculating an adjustment reference of the eye:
d^1_b = ((d^0_11 - d^0_2) / (d^1_11 - d^1_2)) * (d^1_25 - d^1_21);
d^-1_b = ((d^0_11 - d^0_2) / (d^-1_11 - d^-1_2)) * (d^-1_25 - d^-1_21);
obtaining an adjustment parameter according to the coordinate corresponding to the part to be adjusted and the adjustment reference of the part to be adjusted, specifically, calculating the adjustment parameter as follows:
calculating d^k_b = ((d^0_11 - d^0_2) / (d^k_11 - d^k_2)) * (d^k_25 - d^k_21);
The adjustment parameter is: (d^k_b - (d^0_25 - d^0_21)) / ((d^1_b - d^-1_b) / 2);
By means of d^1_b, d^-1_b and d^k_b, the 3D model at scale = 1, the 3D model at scale = -1 and the coordinate points of the face in the picture are all scaled to the model with scale = 0, so that, for different eye sizes, the proportion by which the bone needs to be adjusted relative to the intermediate value is obtained.
The calculated eye size value is then passed into the interface of the editor, which adjusts the size of the eyes and thus realizes the face pinching effect of the 3D model.
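The eye-size computation above can be followed end to end with the short Python sketch below; all coordinate values are made up purely to exercise the formulas and do not come from the patent.

```python
# Worked sketch of the eye-size adjustment; every coordinate value is invented
# for illustration and only the formulas follow the description above.
d0 = {11: 0.310, 2: 0.052, 25: 0.224, 21: 0.196}   # scale = 0 (intermediate size)
d1 = {11: 0.312, 2: 0.050, 25: 0.231, 21: 0.187}   # scale = 1 (largest eyes)
dm1 = {11: 0.308, 2: 0.054, 25: 0.218, 21: 0.203}  # scale = -1 (smallest eyes)
dk = {11: 412.0, 2: 218.0, 25: 355.0, 21: 330.0}   # picture coordinates (pixels)

ref_len = d0[11] - d0[2]                            # size reference length

d1_b = (ref_len / (d1[11] - d1[2])) * (d1[25] - d1[21])
dm1_b = (ref_len / (dm1[11] - dm1[2])) * (dm1[25] - dm1[21])
dk_b = (ref_len / (dk[11] - dk[2])) * (dk[25] - dk[21])

# Adjustment parameter handed to the editor's eye-size interface
eye_size = (dk_b - (d0[25] - d0[21])) / ((d1_b - dm1_b) / 2)
print(round(eye_size, 3))   # a value near 0 keeps the intermediate eye size
```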
In summary, according to the method and the terminal for realizing face pinching of a 3D model provided by the present invention, the feature points of the uploaded picture are calibrated, adjustment parameters are obtained according to the coordinates corresponding to the part to be adjusted on the 3D model in the uploaded picture and the adjustment reference of that part, and the part to be adjusted on the 3D model is adjusted according to the adjustment parameters, thereby realizing face pinching of the 3D model. The processed picture is converted into adjustment parameters for the bones and meshes of the 3D model face, and the generated adjustment parameters are used to adjust those bones and meshes to form a three-dimensional face model, so that different face shapes can be accommodated and higher face-pinching similarity can be obtained.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (8)

1. A method for realizing face pinching of a 3D model is characterized by comprising the following steps:
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
s3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter;
the step S2 includes:
s21, adding a configuration file corresponding to the 3D model of the face to be pinched, wherein the configuration file comprises an adjustable part type, feature points corresponding to the part type, coordinates of the feature points corresponding to the part type in different proportions and reference feature points of the 3D model;
s22, determining the coordinates corresponding to the part to be adjusted in the first coordinate set according to the feature points corresponding to the part types;
and S23, determining an adjusting standard of the part to be adjusted on the 3D model of the face to be pinched according to the feature points corresponding to the part types, the coordinates of the feature points corresponding to the part types under different proportions and the reference feature points of the 3D model.
2. A method of implementing a 3D model face pinching according to claim 1,
the step S1 is preceded by the steps of:
s01, receiving the uploaded pictures, judging whether the pictures have human faces or not, if so, executing a step S02, otherwise, returning to the step S01;
and S02, judging whether the face in the picture is within a preset range from the camera, if not, returning to the step S01, and if so, executing the step S1.
3. A method of implementing a 3D model face pinching according to claim 1,
the coordinates of the feature points corresponding to the part types at different proportions in step S21 include:
and coordinates of feature points respectively corresponding to the maximum size, the minimum size and the intermediate value of the maximum size and the minimum size of the part corresponding to the part type.
4. A method of implementing a 3D model face pinching according to claim 3,
the step S23 includes:
the adjusting reference is as follows:
d^1_b = ((d^0_m - d^0_n) / (d^1_m - d^1_n)) * (d^1_i - d^1_j);
d^-1_b = ((d^0_m - d^0_n) / (d^-1_m - d^-1_n)) * (d^-1_i - d^-1_j);
in step S3, obtaining an adjustment parameter according to the coordinate corresponding to the portion to be adjusted and the adjustment reference of the portion to be adjusted includes:
calculating d^k_b = ((d^0_m - d^0_n) / (d^k_m - d^k_n)) * (d^k_i - d^k_j);
The adjustment parameter is: (d^k_b - (d^0_i - d^0_j)) / ((d^1_b - d^-1_b) / 2);
wherein d^y_x denotes the coordinate corresponding to a feature point, and x denotes the feature point: x = m, n denote the reference feature points of the 3D model, and x = i, j denote the feature points corresponding to the part to be adjusted; y = 1 denotes the coordinate of the feature point when the size of the part to be adjusted in the 3D model is at its maximum, y = -1 when it is at its minimum, y = 0 when it is at the intermediate value between the maximum and the minimum, and y = k denotes the coordinate of the corresponding feature point in the picture.
5. A terminal for implementing a 3D model face-pinching, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:
s1, acquiring the uploaded picture, and determining coordinates corresponding to a predetermined number of feature points on the 3D model of the face to be pinched in the picture to obtain a first coordinate set;
s2, determining coordinates corresponding to the part to be adjusted on the 3D model of the face to be pinched in the first coordinate set and an adjustment reference of the part to be adjusted;
s3, obtaining an adjusting parameter according to the coordinate corresponding to the part to be adjusted and the adjusting reference of the part to be adjusted, and adjusting the part to be adjusted on the 3D model of the face to be pinched according to the adjusting parameter;
the step S2 includes:
s21, adding a configuration file corresponding to the 3D model of the face to be pinched, wherein the configuration file comprises an adjustable part type, feature points corresponding to the part type, coordinates of the feature points corresponding to the part type in different proportions and reference feature points of the 3D model;
s22, determining the coordinates corresponding to the part to be adjusted in the first coordinate set according to the feature points corresponding to the part types;
and S23, determining an adjusting standard of the part to be adjusted on the 3D model of the face to be pinched according to the feature points corresponding to the part types, the coordinates of the feature points corresponding to the part types under different proportions and the reference feature points of the 3D model.
6. A terminal for implementing 3D model face pinching according to claim 5,
the step S1 is preceded by the steps of:
s01, receiving the uploaded pictures, judging whether the pictures have human faces or not, if so, executing a step S02, otherwise, returning to the step S01;
and S02, judging whether the face in the picture is within a preset range from the camera, if not, returning to the step S01, and if so, executing the step S1.
7. A terminal for implementing 3D model face pinching according to claim 5,
the coordinates of the feature points corresponding to the part types at different proportions in step S21 include:
and coordinates of feature points respectively corresponding to the maximum size, the minimum size and the intermediate value of the maximum size and the minimum size of the part corresponding to the part type.
8. A terminal for implementing 3D model face pinching according to claim 7,
the step S23 includes:
the reference value of the adjustment is as follows:
d^1_b = ((d^0_m - d^0_n) / (d^1_m - d^1_n)) * (d^1_i - d^1_j);
d^-1_b = ((d^0_m - d^0_n) / (d^-1_m - d^-1_n)) * (d^-1_i - d^-1_j);
in step S3, obtaining an adjustment parameter according to the coordinate corresponding to the portion to be adjusted and the adjustment reference of the portion to be adjusted includes:
calculating d^k_b = ((d^0_m - d^0_n) / (d^k_m - d^k_n)) * (d^k_i - d^k_j);
The adjustment parameter is: (d^k_b - (d^0_i - d^0_j)) / ((d^1_b - d^-1_b) / 2);
wherein d^y_x denotes the coordinate corresponding to a feature point, and x denotes the feature point: x = m, n denote the reference feature points of the 3D model, and x = i, j denote the feature points corresponding to the part to be adjusted; y = 1 denotes the coordinate of the feature point when the size of the part to be adjusted in the 3D model is at its maximum, y = -1 when it is at its minimum, y = 0 when it is at the intermediate value between the maximum and the minimum, and y = k denotes the coordinate of the corresponding feature point in the picture.
CN201810461808.8A 2018-05-15 2018-05-15 Method and terminal for realizing face pinching of 3D model Active CN108765551B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810461808.8A CN108765551B (en) 2018-05-15 2018-05-15 Method and terminal for realizing face pinching of 3D model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810461808.8A CN108765551B (en) 2018-05-15 2018-05-15 Method and terminal for realizing face pinching of 3D model

Publications (2)

Publication Number Publication Date
CN108765551A CN108765551A (en) 2018-11-06
CN108765551B true CN108765551B (en) 2022-02-01

Family

ID=64006852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810461808.8A Active CN108765551B (en) 2018-05-15 2018-05-15 Method and terminal for realizing face pinching of 3D model

Country Status (1)

Country Link
CN (1) CN108765551B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598749B (en) * 2018-11-30 2023-03-10 腾讯科技(深圳)有限公司 Parameter configuration method, device, equipment and medium for three-dimensional face model
CN109671016B (en) 2018-12-25 2019-12-17 网易(杭州)网络有限公司 face model generation method and device, storage medium and terminal
CN110111417B (en) * 2019-05-15 2021-04-27 浙江商汤科技开发有限公司 Method, device and equipment for generating three-dimensional local human body model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968892A (en) * 2009-07-28 2011-02-09 上海冰动信息技术有限公司 Method for automatically adjusting three-dimensional face model according to one face picture
CN103606190A (en) * 2013-12-06 2014-02-26 上海明穆电子科技有限公司 Method for automatically converting single face front photo into three-dimensional (3D) face model
CN104268932A (en) * 2014-09-12 2015-01-07 上海明穆电子科技有限公司 3D facial form automatic changing method and system
CN105118022A (en) * 2015-08-27 2015-12-02 厦门唯尔酷信息技术有限公司 2-dimensional to 3-dimensional face generation and deformation method and system thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632129A (en) * 2012-08-28 2014-03-12 腾讯科技(深圳)有限公司 Facial feature point positioning method and device
CN104751408B (en) * 2015-03-26 2018-01-19 广东欧珀移动通信有限公司 The method of adjustment and device of face head portrait
CN104966316B (en) * 2015-05-22 2019-03-15 腾讯科技(深圳)有限公司 A kind of 3D facial reconstruction method, device and server
CN106503686A (en) * 2016-10-28 2017-03-15 广州炒米信息科技有限公司 The method and system of retrieval facial image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968892A (en) * 2009-07-28 2011-02-09 上海冰动信息技术有限公司 Method for automatically adjusting three-dimensional face model according to one face picture
CN103606190A (en) * 2013-12-06 2014-02-26 上海明穆电子科技有限公司 Method for automatically converting single face front photo into three-dimensional (3D) face model
CN104268932A (en) * 2014-09-12 2015-01-07 上海明穆电子科技有限公司 3D facial form automatic changing method and system
CN105118022A (en) * 2015-08-27 2015-12-02 厦门唯尔酷信息技术有限公司 2-dimensional to 3-dimensional face generation and deformation method and system thereof

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An interactive adjustment algorithm for facial mesh models; 李梦东 et al.; Journal of Northern Jiaotong University; 2002-08-15; vol. 26, no. 4; pp. 97-100 *
A detail adjustment method for feature regions of a face mesh model; 雷林华 et al.; Computer Engineering and Applications; 2008-12-30; no. 16; pp. 194-196 *
A survey of facial feature point extraction methods; 李月龙 et al.; Chinese Journal of Computers; 2016-07-15; vol. 39, no. 7; pp. 1356-1374 *

Also Published As

Publication number Publication date
CN108765551A (en) 2018-11-06

Similar Documents

Publication Publication Date Title
JP7136875B2 (en) Eye Pose Identification Using Eye Features
EP3885967A1 (en) Object key point positioning method and apparatus, image processing method and apparatus, and storage medium
CN108305312B (en) Method and device for generating 3D virtual image
US10198845B1 (en) Methods and systems for animating facial expressions
CN108765551B (en) Method and terminal for realizing face pinching of 3D model
JP6864449B2 (en) Methods and devices for adjusting the brightness of the image
CN106920274B (en) Face modeling method for rapidly converting 2D key points of mobile terminal into 3D fusion deformation
KR102279813B1 (en) Method and device for image transformation
WO2021012596A1 (en) Image adjustment method, device, storage medium, and apparatus
KR102277048B1 (en) Preview photo blurring method and device and storage medium
CN111754415B (en) Face image processing method and device, image equipment and storage medium
CN103210421B (en) Article detection device and object detecting method
CN105474263B (en) System and method for generating three-dimensional face model
CN108053283B (en) Garment customization method based on 3D modeling
WO2019137131A1 (en) Image processing method, apparatus, storage medium, and electronic device
CN107452049B (en) Three-dimensional head modeling method and device
TW201635198A (en) Positioning feature points of human face edge
CN109272566A (en) Movement expression edit methods, device, equipment, system and the medium of virtual role
JP5949331B2 (en) Image generating apparatus, image generating method, and program
CN104715447A (en) Image synthesis method and device
WO2023109753A1 (en) Animation generation method and apparatus for virtual character, and storage medium and terminal
CN110148191B (en) Video virtual expression generation method and device and computer readable storage medium
WO2022143354A1 (en) Face generation method and apparatus for virtual object, and device and readable storage medium
CN115546365A (en) Virtual human driving method and system
CN113902851A (en) Face three-dimensional reconstruction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant