CN112488912A - Image processing method and device and electronic equipment - Google Patents
- Publication number
- CN112488912A (application number CN202011363526.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- face
- maxillofacial
- parameters
- mouth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The application discloses an image processing method, an image processing apparatus, and an electronic device, belonging to the technical field of communications. It addresses the problem that, in related-art processing, photographs of a user's face at angles other than a frontal view cannot be effectively beautified, so the user cannot obtain a beautified face photograph at an arbitrary angle. The method comprises the following steps: the electronic device acquires feature point information of the mandible feature points among the user's facial feature points; the electronic device calculates the user's maxillofacial parameters from the feature point information of the mandible feature points; the electronic device determines the user's face shape from the maxillofacial parameters; and the electronic device adjusts the maxillofacial parameters based on preset maxillofacial parameters matched to the user's face shape.
Description
Technical Field
The application belongs to the technical field of communications, and particularly relates to an image processing method, an image processing apparatus, and an electronic device.
Background
With the development of electronic device technology, the photographing functions of electronic devices have become increasingly rich; among them, the beautification function is favored by more and more users. In the related art, after the electronic device captures a photograph of the user's frontal face, it compares the information about the user's facial features and face shape in the photograph with a standard face shape prestored in the device, and then adjusts the angles or proportions of the facial features, face shape, and so on.
However, this processing cannot effectively beautify photographs of the user's face at angles other than frontal (e.g., a profile photograph or a near-profile photograph), so the user cannot obtain a beautified photograph at an arbitrary angle.
Disclosure of Invention
The embodiments of the application aim to provide an image processing method, an image processing apparatus, and an electronic device, solving the problem that, in related-art processing, photographs of a user's face at angles other than a frontal view cannot be effectively beautified, so the user cannot obtain a beautified photograph at an arbitrary angle.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including: the electronic device acquires feature point information of the mandible feature points among the user's facial feature points; the electronic device calculates the user's maxillofacial parameters from the feature point information of the mandible feature points; the electronic device determines the user's face shape from the maxillofacial parameters; and the electronic device adjusts the maxillofacial parameters based on preset maxillofacial parameters matched to the user's face shape.
In a second aspect, an embodiment of the present application provides an image processing apparatus comprising an obtaining module, a calculating module, a determining module, and an adjusting module. The obtaining module acquires feature point information of the mandible feature points among the user's facial feature points; the calculating module calculates the user's maxillofacial parameters from that information; the determining module determines the user's face shape from the maxillofacial parameters; and the adjusting module adjusts the maxillofacial parameters based on preset maxillofacial parameters matched to the user's face shape.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, after obtaining feature point information of the mandible feature points among the user's facial feature points, the electronic device can calculate the user's maxillofacial parameters from that information, determine the user's face shape from the maxillofacial parameters, and finally determine preset maxillofacial parameters matching that face shape, on the basis of which the maxillofacial parameters are adjusted. In this way, by acquiring features of the user's jaw region, the jaw parameters can be adjusted in a targeted manner, providing the electronic device with accurate and effective correction parameters when processing photographs of the user's face at non-frontal angles (for example, profile or other oblique views), so that such photographs can be processed effectively.
Drawings
Fig. 1 is a schematic view of the E line used in an image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic view of a face to which an image processing method according to an embodiment of the present application is applied;
fig. 4 is a schematic view of a face to which an image processing method according to an embodiment of the present application is applied;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a first schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequence or chronological order. It should be understood that data so labeled may be interchanged where appropriate, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, "first", "second", and the like are generic labels and do not limit the number of items; for example, a first item may be one item or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The terms appearing in the embodiments of the present application are explained below as follows:
1.E line
The E line, also known as the esthetic line, is the line connecting the nasal tip and the soft-tissue chin point. In a well-balanced profile, the lower lip touches the E line, and the upper lip lies on the line or 0.5-1 mm behind it.
As shown in fig. 1, E-line 20 is formed by the line connecting the nose tip point 21 and the soft tissue chin point 22.
2. Lower jaw part
The lower jaw region mainly comprises the chin, the mandibular border, and the mandibular angle. The chin is formed by the junction of the horizontal rami of the mandible on both sides and lies below the midline of the face. The chin-neck angle, formed by the lower edge of the chin and the upper neck, is the boundary between the face and the neck. The chin, the cheekbones on both sides, and the mandibular-angle regions form the basic outline of the face and, together with facial tissues and organs such as the nose and lips, constitute the basic characteristics of the face shape.
The shape, size, and position of the chin have a great influence on facial appearance. Underdevelopment of the chin bone may result in a "beak face", "pointed face", or "bird face"; overdevelopment of the chin may cause a "long face" or "horse face"; and deflection of the chin causes left-right facial asymmetry.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image processing method provided by the embodiment of the application can be applied to scenes of people images shot by users.
Consider the scenario in which the user photographs a portrait. Suppose the user photographs his or her own side face with an electronic device; after shooting, an unprocessed photo 1 is obtained, and the electronic device then processes photo 1 with the beautification function. In the related art, the beautification function mainly adjusts the face shape, facial features, and skin attributes (e.g., skin color, skin fineness) of the user's frontal face, so for photo 1, which shows only a side face and no complete frontal information, the electronic device cannot make effective adjustments to, for example, the facial skeleton or the side-face contour. Typically, after the electronic device adjusts photo 1 with the beautification function, the resulting photo 2 is essentially unchanged relative to photo 1. Obviously, such an essentially ineffective adjustment does not meet the user's beautification needs.
In the embodiments of the application, after obtaining photo 1, the electronic device can calculate the user's maxillofacial parameters from the feature point information of the mandible feature points in photo 1, determine the user's face shape from those parameters, determine the preset maxillofacial parameters matching that face shape, and adjust the maxillofacial parameters accordingly, so that the resulting photo 2 effectively adjusts the user's jaw region relative to photo 1. By acquiring the characteristics of the user's jaw region in photo 1, the jaw parameters can be adjusted in a targeted manner, providing accurate and effective correction parameters when the electronic device processes photographs of the user's face at non-frontal angles (for example, profile or other oblique views), so that such photographs can be processed effectively.
The present embodiment provides an image processing method, as shown in fig. 2, the image processing method includes the following steps 301 to 304:
step 301: the image processing apparatus acquires feature point information of a mandible part feature point among the face feature points of the user.
In the embodiment of the present application, the user's facial feature points may include feature points for every orientation of the user's face. In one example, the facial feature points may include frontal-face feature points and may further include side-face feature points.
In the embodiment of the present application, the image processing apparatus may acquire the user's facial feature points through a full-face scanning technique.
For example, the image processing apparatus may scan the user's whole face through 3D face scanning technology in the electronic device and acquire three-dimensional data of whole-face images, which may include a frontal image and images at other angles. A 3D model of the user's face is then generated by combining the data of these images. The electronic device may set key points on the 3D face model as facial feature points, each carrying feature point information. When searching for key points on the user's 3D face model, the device may locate them according to a prestored key-point distribution model for a preset 3D model.
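The key-point search described above can be sketched as a nearest-neighbor match between a prestored keypoint-distribution template and the scanned mesh. This is an illustrative sketch only; the function name, the pre-alignment assumption, and the matching rule are assumptions, not the patent's actual algorithm:

```python
import numpy as np

def locate_feature_points(mesh_vertices: np.ndarray,
                          template_points: np.ndarray) -> np.ndarray:
    """For each keypoint of a (pre-aligned) template, pick the nearest vertex
    of the scanned 3D face mesh as the corresponding facial feature point.

    mesh_vertices:   (N, 3) points from the 3D face scan.
    template_points: (K, 3) prestored keypoint-distribution model.
    Returns:         (K, 3) feature-point coordinates on the scanned mesh.
    """
    # Squared distance from every template keypoint to every mesh vertex.
    d2 = ((template_points[:, None, :] - mesh_vertices[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)          # closest mesh vertex per keypoint
    return mesh_vertices[nearest]
```

In practice the template would first be rigidly aligned to the scan; the sketch assumes that step has already happened.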
In the process of scanning the user's whole face with the 3D face scanning technology in the electronic device and acquiring three-dimensional data of whole-face images, the three-dimensional facial data may be acquired by an infrared lens, a flood-illuminator sensor group, and a dot-matrix projector. The infrared lens captures facial image information to generate three-dimensional data, while the flood illuminator works with the infrared lens to adapt to various lighting environments, unaffected by day or night. After the facial image information is collected, the dot-matrix projector projects a large number of light spots onto the face for 3D modeling.
In the embodiment of the present application, the above-mentioned mandible component feature point may be a mandible component feature point selected by the image processing apparatus from the face feature points.
For example, multiple sets of image relationship models between the mandible region and the face may be prestored in the electronic device. In one example, the relationship between the mandible region and the whole face differs across face shapes (for example, the proportion of the whole face occupied by the mandible region), so relationships for multiple face shapes can be prestored in the electronic device, enabling the image processing apparatus to screen the mandible feature points out of the user's facial feature points for further processing.
In one example, the mandible feature points may include the highest point of the nose tip, the highest point of the lips (i.e., the position of the lips), and the highest point of the chin.
In the embodiment of the present application, the feature point information may be data information of a feature point, for example, coordinate information of the feature point.
Step 302: the image processing device calculates the maxillofacial parameters of the user according to the feature point information of the feature points of the mandible part.
In this embodiment, the maxillofacial parameter may be parameter information corresponding to a maxillofacial feature point.
In one example, the maxillofacial parameters may include shape, angle, and area parameters of the user's side face, as well as proportion parameters relative to the user's facial feature points, for example, the ratio of the jaw-face area to the area of the user's whole face.
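As a toy illustration of the proportion parameter, the jaw-to-face area ratio can be computed from feature-point outlines with the shoelace formula. The function names and the polygon representation are assumptions for illustration, not the patent's defined computation:

```python
def polygon_area(points):
    """Shoelace formula: area of a simple 2D polygon given as (x, y) pairs."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def jaw_area_ratio(jaw_outline, face_outline):
    """Proportion parameter: jaw-region area over whole-face area."""
    return polygon_area(jaw_outline) / polygon_area(face_outline)
```

The outlines would come from the mandible feature points and the full set of facial feature points, respectively.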
Step 303: the image processing device determines the face shape of the user according to the maxillofacial parameters.
In the embodiment of the present application, the user's face shape may be determined by selecting, from face models prestored in the electronic device, the model closest to the user's face. The prestored face models may be the most common face shapes among existing portraits.
In one example, the facial model may be classified into a straight type, a concave type and a convex type.
The first method comprises the following steps: straight face type. The straight face type generally meets the aesthetic E line, and the nose tip, the lips and the chin are basically on the same line; is also beautiful and popular among people, and the face looks modesty.
And the second method comprises the following steps: and (4) a convex surface type. The partial image of the mandible actually presented by the above-mentioned convex face is that the lips are protruded outward, generally because of what is commonly known as "bucktooth". The person is a male protrusion; some people are bouts of things up and down. People of the convex type are often perceived by everyone as being not particularly stylish in nature.
And the third is that: a concave type. The jaw partial image actually appearing in the above concave face is a chin projection. Generally, the common people are commonly called as the "shoehorn face", and the typical face is the "ground cover day". In the process of developing the upper jaw and the lower jaw, the upper jaw of a human body is normally developed, and the lower jaw of the human body is protruded; some have normal chin development but insufficient maxilla development; there is also a human having a lack of maxillary development accompanied by a prominent chin, all three of which make the human look concave in the middle of the face. It is generally perceived as being of a higher severity.
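The three profile types above can be distinguished from the E line alone: the signed distance of the lip point from the nose-tip-to-chin line tells whether the lips lie on, in front of, or behind the line. A minimal sketch, assuming a 2D side view with +x pointing out of the face, the nose above the chin, and a hypothetical tolerance:

```python
import math

def classify_profile(nose_tip, lip, chin, tol=1.0):
    """Classify a side profile as straight / convex / concave via the E line.

    Points are (x, y) in a side view with +x pointing out of the face and the
    nose above the chin. A positive signed distance means the lips are in
    front of the E line (convex); negative means behind it (concave); within
    tol of the line counts as straight.
    """
    (x1, y1), (x2, y2) = nose_tip, chin
    px, py = lip
    # 2D cross product of (chin - nose) with (lip - nose), normalized by the
    # segment length, gives the signed point-to-line distance.
    d = ((x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)) / math.hypot(x2 - x1,
                                                                     y2 - y1)
    if abs(d) <= tol:
        return "straight"
    return "convex" if d > 0 else "concave"
```

The tolerance stands in for the "essentially on the same line" wording; a real implementation would calibrate it against image scale.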
Step 304: the image processing device adjusts the maxillofacial parameters based on preset maxillofacial parameters matched with the face shape of the user.
Optionally, the preset maxillofacial parameters include parameters corresponding to feature point information of the feature points of the mandible part of the preset maxillofacial region.
For example, the preset maxillofacial parameter may be pre-stored in the electronic device, or may be set by a user in a self-defined manner, which is not limited in the embodiment of the present application.
Illustratively, the preset maxillofacial parameters may be those of the facial form most prevalent in the existing portrait.
In one example, the maxillofacial parameters of the three facial models may be prestored in the electronic device as the preset maxillofacial parameters, that is, the straight-face type maxillofacial parameters, the convex-face type maxillofacial parameters, and the concave-face type maxillofacial parameters may be prestored in the electronic device as the preset maxillofacial parameters.
In the embodiment of the present application, the image processing apparatus may adjust a shape parameter, an angle parameter, an area parameter, and a scale parameter compared with a facial feature point of the user among jaw-facial parameters of the user.
It should be noted that, when adjusting the user's jaw parameters, the image processing apparatus may perform a 3D stereo adjustment over all of the user's mandible feature points, rather than adjusting only a certain local area or line. For example, when the user's face is convex, the image processing apparatus may apply the maximum inward adjustment at the protruding vertex and gradually reduce the adjustment angle from the most protruding part toward both sides of the face. This strengthens the final effect of the adjustment and makes the adjusted image look more realistic and three-dimensional.
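The tapered adjustment described above, with maximum correction at the protruding vertex diminishing toward both sides, can be sketched as a distance-weighted displacement. The Gaussian falloff, the axis convention, and the parameter names are illustrative assumptions:

```python
import numpy as np

def tapered_adjust(points, apex_index, max_shift, falloff=10.0):
    """Shift the apex inward (along -x) by max_shift; neighboring feature
    points move by a smoothly decaying fraction of that shift.

    points:    (N, 3) mandible feature-point coordinates.
    max_shift: inward displacement applied at the apex.
    falloff:   distance scale over which the adjustment dies out.
    """
    points = np.asarray(points, dtype=float)
    dist = np.linalg.norm(points - points[apex_index], axis=1)
    weight = np.exp(-(dist / falloff) ** 2)   # 1 at the apex, near 0 far away
    adjusted = points.copy()
    adjusted[:, 0] -= max_shift * weight      # x assumed to be the out-of-face axis
    return adjusted
```

Because the weight decays smoothly rather than cutting off, neighboring points follow the apex partway, which is what keeps the adjusted contour looking continuous.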
In one example, when the user's face shape is the straight type, the image processing apparatus does not need to correct the user's jaw-face parameters and keeps them as they are.
In one example, when the user's face is the convex type, the image processing apparatus may algorithmically adjust the feature point information of the user's mouth region so that the mouth is corrected inward and closes in, correcting the protruding mouth.
In one example, when the user's face is the concave type, the image processing apparatus may algorithmically adjust the feature point information of the user's mouth region so that the mouth is corrected outward and the sunken mouth is raised until it lies on the line between the nose tip and the chin tip.
Example 1: suppose that a user with a convex face needs to use an electronic device to adjust his own side-face photograph he takes. The electronic equipment firstly scans the face of the user through 3D scanning software, then obtains the coordinate information of the lower jaw part feature points (namely the feature point information) in the feature points of the face of the user, and can calculate the maxillofacial parameters of the maxillofacial of the user according to the coordinate information of the lower jaw part feature points, wherein the maxillofacial parameters comprise shape parameters, angle parameters, area parameters and proportion parameters compared with the face feature points of the user. Then, the electronic device can determine that the face of the user is a convex face according to the shape parameter, the angle parameter, the area parameter and the proportion parameter compared with the facial feature point of the user, finally, the electronic device can adjust the jaw face parameter in the side face picture by using the preset jaw face parameter matched with the convex face shape, the angle parameter of the protruding part in the side face picture of the user is properly adjusted to be smaller according to the preset jaw face parameter, and finally the user with the convex face is presented as a standard face.
In the image processing method provided by the embodiments of the present application, after obtaining feature point information of the mandible feature points among the user's facial feature points, the image processing apparatus can calculate the user's maxillofacial parameters from that information, determine the user's face shape from the maxillofacial parameters, and finally determine preset maxillofacial parameters matching that face shape, on the basis of which the maxillofacial parameters are adjusted. By acquiring features of the user's jaw region, the jaw parameters can be adjusted in a targeted manner, providing the electronic device with accurate and effective correction parameters when processing photographs of the user's face at non-frontal angles (for example, profile or other oblique views), so that such photographs can be processed effectively.
Optionally, in this embodiment of the application, step 302 may include the following steps A1 and A2:
step A1: the image processing apparatus recognizes the E-line feature points from the mandible part feature points.
Step A2: the image processing device calculates the maxillofacial parameters of the user based on the feature point information of the E-line feature point.
For example, the image processing apparatus may identify the E-line feature points from the distribution of the mandible feature points and a preset relationship between the E-line feature points and the mandible feature points. The preset relationship may be the relative coordinate positions of the E-line feature points with respect to the mandible feature points.
In an example, the E-line feature points may be the highest points of the user's nose, mouth, and chin among the mandible feature points, or the midpoints of the nose, mouth, and chin, or the most prominent feature points of the nose, mouth, and chin.
Example 2: Continuing Example 1, when the electronic device calculates the user's maxillofacial parameters from the coordinate information of the mandible feature points, it may first filter out the feature point information of the E-line feature points from those coordinates. Since the user took a side-face photograph, after recognizing the highest points of the nose, mouth, and chin in the photograph, the electronic device can calculate the user's maxillofacial parameters from the coordinates of those highest points.
In this way, the image processing apparatus needs to acquire only the E-line feature points among the mandible feature points to calculate the user's jaw-face parameters for facial analysis and image adjustment, which reduces the amount of calculation, saves the device resources consumed by the computation, and thus saves power.
Optionally, in this embodiment of the present application, the E-line feature points include the nose vertex, the mouth vertex, and the chin vertex of the user's face. On this basis, before step A1, the image processing method may further include the following step B:
and B: the image processing apparatus determines the mouth state of the user based on feature point information of the feature point of the face of the user.
For example, feature point information of the facial feature points corresponding to various expressions and states of the facial features of a portrait may be prestored in the electronic device.
In an example, feature point information of the feature points corresponding to the mouth contour may be prestored in the electronic device for two states: first, the mouth open; second, the mouth closed. The feature point information of the mouth-contour feature points in the open state is entirely different from that in the closed state.
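The template match in the example above can be sketched as choosing whichever prestored contour is closer in mean point-to-point distance. The function name, the contour representation, and the distance metric are assumptions for illustration:

```python
import math

def mouth_state(contour, open_template, closed_template):
    """Return 'open' or 'closed' by matching a mouth contour (a list of
    (x, y) points) against two prestored templates of the same length."""
    def mean_dist(a, b):
        # Average Euclidean distance between corresponding contour points.
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return ("open" if mean_dist(contour, open_template)
            < mean_dist(contour, closed_template) else "closed")
```

A production matcher would first normalize the contour for position and scale; the sketch assumes the contours are already in a common frame.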
On the basis of step B above, the mouth vertex can be divided into the following two cases according to the mouth state.
In the first case:
For example, when the mouth state is an exposed state, the mouth vertex is the highest point of the exposed tooth portion.
It can be understood that when the mouth is open, the feature point information of the mouth-contour feature points acquired by the image processing apparatus matches the pre-stored first state, the mouth state is determined to be the tooth-exposing state, and the mouth in the image shows exposed teeth. In this state, the electronic device may acquire the feature point information corresponding to the feature points of the exposed tooth portion and, after analysis, use the highest point among those feature points as the mouth vertex.
It should be noted that the highest point may be the most prominent feature point of the mouth of the user.
Further, after the highest tooth point is obtained, when adjusting the maxillofacial parameters the image processing apparatus may adjust the maxillofacial parameters corresponding to the feature point information of the exposed tooth portion's feature points. That is, by adjusting the maxillofacial parameters corresponding to the tooth portion, the image processing apparatus can adjust the state of the exposed teeth in the image, for example correcting protruding front teeth to a normal state.
Example 3: assume the side-face picture taken by the user shows exposed teeth. With reference to examples 1 and 2 above, when the electronic device adjusts the maxillofacial parameters in the side-face picture using the preset maxillofacial parameters matched with the convex face shape, it can take the highest point of the teeth in the picture as the mouth vertex, connect it with the nose vertex and the chin vertex to form the lines, and appropriately reduce the angle parameter of the protruding mouth portion with reference to the preset maxillofacial parameters, so that the user with a convex face finally presents a standard face shape.
In the second case:
For example, when the mouth state is a non-exposed state, the mouth vertex is the highest point of the lips.
It can be understood that when the mouth is closed, the feature point information of the mouth-contour feature points acquired by the image processing apparatus matches the pre-stored second state, the mouth state is determined to be the non-exposed state, and the mouth in the image shows no teeth. In this state, the electronic device may analyze the feature point information corresponding to the feature points of the lip contour to obtain the mouth vertex. Generally, when the mouth is closed, the mouth vertex is the lip bead point.
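The two mouth-state cases above can be sketched in code. This is a minimal illustration under stated assumptions: landmarks are simple (x, y) image coordinates with y growing downward, the profile faces left so the most protruding contour point has the smallest x, and the landmark names and the gap threshold are illustrative, not the patent's exact matching method.

```python
# Hypothetical sketch of step B and the two mouth-vertex cases.
# Coordinates: (x, y) with y growing downward; left-facing profile.

def mouth_state(upper_lip_bottom, lower_lip_top, gap_threshold=3.0):
    """Classify the mouth state from two inner-lip feature points.

    Returns 'exposed' (mouth open, teeth visible) when the vertical
    gap between the lips exceeds the threshold, else 'non-exposed'.
    """
    gap = lower_lip_top[1] - upper_lip_bottom[1]
    return "exposed" if gap > gap_threshold else "non-exposed"

def mouth_vertex(state, tooth_points, lip_points):
    """Pick the E-line mouth vertex for a left-facing profile photo:
    the most protruding point (smallest x) of the relevant contour."""
    points = tooth_points if state == "exposed" else lip_points
    return min(points, key=lambda p: p[0])
```

With a wide inner-lip gap the state is "exposed" and the mouth vertex is taken from the exposed-tooth contour; otherwise it falls back to the lip contour, mirroring the two cases above.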
In this way, by determining the user's mouth state, the electronic device adapts its strategy for acquiring the E-line feature points, so that E-line feature points matching the mouth state are used and the user's mandible region can be adjusted effectively, specifically, and accurately.
Optionally, in this embodiment of the present application, step 302 above may include the following step C:
Step C: the image processing apparatus obtains the user's maxillofacial parameters by calculating the included angle between a first line connecting the nose vertex and the mouth vertex of the user's face and a second line connecting the mouth vertex and the chin vertex of the user's face.
For example, the nose vertex, mouth vertex, and chin vertex mentioned above are as described earlier and are not repeated here.
Illustratively, the included angle between the first line and the second line is calculated to obtain the user's maxillofacial parameters, from which the user's face shape can be determined, such as a straight face shape, a concave face shape, or a convex face shape.
The included angle is measured between the first line and the second line on the same side: in one example it may be the angle opening toward the outside of the face, and in another example the angle opening toward the inside of the face.
In this way, the image processing apparatus can acquire the maxillofacial parameters, and thus determine the user's face shape, simply by calculating the included angle between the first line (nose vertex to mouth vertex) and the second line (mouth vertex to chin vertex). This simple calculation determines the face shape accurately, so that the user's maxillofacial region can then be adjusted accurately according to that face shape.
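Step C can be sketched as a signed-angle computation at the mouth vertex. The sketch below assumes (x, y) image coordinates with y growing downward and the face protruding toward +x, so a convex profile yields an angle above 180° and a concave one below; this orientation convention and the 175°-185° "straight" band are illustrative assumptions, not the patent's normative values.

```python
import math

def e_line_angle(nose, mouth, chin):
    """Signed included angle (degrees) at the mouth vertex between the
    nose-mouth line and the mouth-chin line, measured on one fixed side
    of the face."""
    v1 = (nose[0] - mouth[0], nose[1] - mouth[1])
    v2 = (chin[0] - mouth[0], chin[1] - mouth[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    # atan2 gives a signed angle in (-180, 180]; mapping it to [0, 360)
    # makes a convex profile read as > 180° and a concave one as < 180°.
    return math.degrees(math.atan2(cross, dot)) % 360

def face_shape(angle, lo=175.0, hi=185.0):
    """Classify the face shape from the E-line angle."""
    if lo <= angle <= hi:
        return "straight"
    return "convex" if angle > hi else "concave"
```

A straight profile with the three vertices collinear gives exactly 180°, matching the ideal case described below.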
Optionally, in this embodiment of the present application, after obtaining the maxillofacial parameters of the user in step C, the image processing method provided in this embodiment of the present application may include steps D1 and D2 as follows:
step D1: and the image processing device keeps the current maxillofacial parameters of the user under the condition that the angle numerical value of the included angle belongs to a first preset angle numerical value range.
Step D2: and the image processing device adjusts the angle value to be within the first preset angle value range under the condition that the angle value of the included angle exceeds the first preset angle value range.
For example, the first preset angle value range may be preset by the electronic device or customized by the user, which is not limited in the embodiment of the present application.
It can be understood that a straight profile is generally considered the ideal face shape, corresponding to an included angle of 180°. In practice, however, because everyone's facial features differ, directly forcing the included angle to 180° when the maxillofacial parameter deviates from it does not produce the best visual effect. A first preset angle value range is therefore set for the included angle: any angle within this range is treated as ideal by default, and the user's maxillofacial parameters need not be adjusted.
Example 4: assume the first preset angle value range is 175° to 185°. As shown in fig. 3, in the side-face photograph taken by the user, the nose vertex and the mouth vertex are connected to generate a first line 31, and the mouth vertex and the chin vertex are connected to generate a second line 32. If the included angle 33 between the first line 31 and the second line 32 is 200°, the user's face can be determined to be convex, and the electronic device can adjust the face until the included angle 33 falls within 175° to 185°.
Example 5: with reference to example 4, as shown in fig. 4, if the included angle 43 between the first line 31 and the second line 32 is 160°, the user's face can be determined to be concave, and the electronic device can adjust the face, moving the mouth outward, until the included angle 43 falls within 175° to 185°.
In this way, the user's face shape can be judged quickly from the relationship between the included angle and the first preset angle value range, and the mouth is adjusted until the angle falls within that range, so that the user's face shape is corrected quickly and accurately.
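Steps D1 and D2 amount to clamping the measured angle into the preset range. A minimal sketch follows; pulling an out-of-range angle to the nearest bound of the range is an assumption for illustration, since the embodiment only requires the adjusted value to land somewhere inside the range.

```python
def adjust_angle(angle, lo=175.0, hi=185.0):
    """Steps D1/D2: keep the angle if it already lies in the first
    preset angle value range, otherwise pull it to the nearest bound."""
    if lo <= angle <= hi:
        return angle                      # D1: keep current parameters
    return hi if angle > hi else lo       # D2: adjust into the range
```

Applied to examples 4 and 5, a convex 200° profile is reduced to 185° and a concave 160° profile is raised to 175°.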
Optionally, in this embodiment of the present application, the maxillofacial parameters include an angle parameter of the user's mandible-maxillofacial region. On this basis, step 304 above may include the following step E:
Step E: the image processing apparatus adjusts the angle parameter of the mandible-maxillofacial region based on the preset maxillofacial parameters matched with the user's face shape.
For example, the angle parameter of the mandible-maxillofacial region may include the angle parameter of the E-line feature points. Typically, in a well-balanced profile, the lower lip lies on the E-line and the upper lip lies on the E-line or 0.5-1 mm behind it, so that when the E-line feature points are connected the angle parameter is 180°.
In one example, when the user's face is convex, connecting the E-line feature points shows an obvious convex point, and the image processing apparatus can use an algorithm to contract the user's mandible-maxillofacial region inward, reducing the maxillofacial angle; when the user's face is concave, connecting the E-line feature points shows an obvious concave point, and the image processing apparatus can use an algorithm to expand the mouth outward, increasing the maxillofacial angle.
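One way to realize such an inward contraction or outward expansion is to scale the mouth vertex's perpendicular offset from the nose-chin chord: a factor below 1 moves the vertex toward the chord and pushes the E-line angle toward 180° in both cases (pulling a convex mouth in, pushing a recessed concave mouth out). This is a geometric sketch only, an assumed mechanism; the pixel warp that a real renderer would drive from the new vertex is not shown.

```python
def retarget_mouth_vertex(nose, mouth, chin, factor):
    """Scale the mouth vertex's perpendicular offset from the nose-chin
    chord. factor < 1 moves the vertex toward the chord (E-line angle
    toward 180 degrees); factor = 0 places it exactly on the chord."""
    ax, ay = nose
    bx, by = chin
    dx, dy = bx - ax, by - ay
    # Orthogonal projection of the mouth vertex onto the nose-chin chord.
    t = ((mouth[0] - ax) * dx + (mouth[1] - ay) * dy) / (dx * dx + dy * dy)
    px, py = ax + t * dx, ay + t * dy
    # Scale the offset from the chord; a renderer would then warp the
    # surrounding mouth pixels toward the returned vertex.
    return (px + factor * (mouth[0] - px), py + factor * (mouth[1] - py))
```

For collinear nose and chin on the y-axis with a protruding mouth, factor 0 snaps the vertex onto the axis (a straight 180° profile) and factor 1 leaves it unchanged.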
In this way, by adjusting the angle parameter of the mandible-maxillofacial region, the image processing apparatus makes the user's face shape better conform to prevailing aesthetic standards, providing the user with a simple, direct, quick, and effective image processing method.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. The image processing apparatus provided in the embodiment of the present application is described with an example in which an image processing apparatus executes an image processing method.
Fig. 5 is a schematic diagram of a possible structure of an image processing apparatus for implementing the embodiment of the present application. As shown in fig. 5, the apparatus 600 includes an obtaining module 601, a calculating module 602, a determining module 603, and an adjusting module 604; the obtaining module 601 is configured to obtain feature point information of mandible part feature points in facial feature points of a user; the calculating module 602 is configured to calculate a maxillofacial parameter of the user according to the feature point information of the mandible part feature point acquired by the acquiring module 601; the determining module 603 is configured to determine the facial form of the user according to the maxillofacial parameter calculated by the calculating module 602; the adjusting module 604 is configured to adjust the maxillofacial parameters based on preset maxillofacial parameters matched with the face of the user determined by the determining module 603.
In the image processing apparatus provided in the embodiment of the present application, after obtaining feature point information of a lower jaw part feature point in a user face feature point, the image processing apparatus may calculate a maxillofacial parameter of the user according to the feature point information of the lower jaw part feature point, determine a face shape of the user according to the maxillofacial parameter, finally determine a preset maxillofacial parameter matched with the face shape of the user, and further adjust the maxillofacial parameter. In this way, by acquiring the characteristics of the jaw part of the user, the parameters of the jaw part of the user can be adjusted in a targeted manner, so that accurate and effective correction parameters are provided for the electronic device when the electronic device processes the face photos of the user at angles other than the front (for example, the face photos of the user at the side or other angles), and the electronic device can effectively process the face photos of the user at angles other than the front.
Optionally, in this embodiment of the present application, the apparatus 600 further includes: an identification module 605; the identifying module 605 is configured to identify E-line feature points from the mandible part feature points; the calculating module 602 is specifically configured to calculate the maxillofacial parameters of the user based on the feature point information of the E-line feature point identified by the identifying module 605.
Optionally, in this embodiment of the application, the E-line feature points include a nose vertex, a mouth vertex, and a chin vertex of the face of the user, and the determining module 603 is further configured to determine the mouth state of the user according to the feature point information of the face feature point of the user, acquired by the acquiring module; when the mouth state is an exposed state, the mouth vertex is a tooth highest point of an exposed tooth part; when the mouth state is a non-exposed state, the mouth apex is a highest lip point.
Optionally, in this embodiment of the present application, the apparatus 600 further includes: an acquisition module 606; the obtaining module 606 is configured to obtain the maxillofacial parameters of the user by calculating an included angle between a first connection line between a nose vertex and a mouth vertex of the user's face and a second connection line between the mouth vertex and a chin vertex of the user's face.
Optionally, in this embodiment of the application, the adjusting module 604 is further configured to maintain the current maxillofacial parameter of the user when the angle value of the included angle belongs to a first preset angle value range; the adjusting module is further configured to adjust the angle value to the first preset angle value range when the angle value of the included angle exceeds the first preset angle value range.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, which are not specifically limited in the embodiments of the present application.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 and fig. 2, and is not described herein again to avoid repetition.
It should be noted that, as shown in fig. 5, modules that are necessarily included in the image processing apparatus 600 are indicated by solid line boxes, such as an acquisition module 601; modules that may or may not be included in the image processing apparatus 600 are illustrated with dashed boxes, such as the recognition module 605.
Optionally, as shown in fig. 6, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the foregoing image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, and a processor 110. The user input unit 107 includes a touch panel 1071 and other input devices 1072; the display unit 106 includes a display panel 1061; the input unit 104 includes a graphics processing unit 1041 and a microphone 1042; and the memory 109 may be used to store software programs (e.g., an operating system and the application programs needed for at least one function) and various data.
Those skilled in the art will appreciate that the electronic device 100 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, which is not repeated here.
The processor 110 is configured to obtain feature point information of mandible part feature points in the face feature points of the user; the processor 110 is further configured to calculate a maxillofacial parameter of the user according to the feature point information of the mandible part feature points; the processor 110 is further configured to determine a facial form of the user according to the maxillofacial parameter; the processor 110 is further configured to adjust the maxillofacial parameters based on preset maxillofacial parameters matched with the face shape of the user.
In the electronic device in the embodiment of the application, after the feature point information of the lower jaw part feature point in the user face feature point is acquired, the maxillofacial parameter of the user can be calculated according to the feature point information of the lower jaw part feature point, the facial form of the user is determined according to the maxillofacial parameter, and finally, the preset maxillofacial parameter matched with the facial form of the user is determined, so that the maxillofacial parameter is adjusted. In this way, by acquiring the characteristics of the jaw part of the user, the parameters of the jaw part of the user can be adjusted in a targeted manner, so that accurate and effective correction parameters are provided for the electronic device when the electronic device processes the face photos of the user at angles other than the front (for example, the face photos of the user at the side or other angles), and the electronic device can effectively process the face photos of the user at angles other than the front.
Optionally, the processor 110 is further configured to identify an E-line feature point from the mandible part feature points; the processor 110 is further configured to calculate the maxillofacial parameters of the user according to the feature point information of the E-line feature point.
Optionally, the processor 110 is specifically configured to obtain the maxillofacial parameter of the user by calculating an included angle between a first connection line between a nose vertex and a mouth vertex of the user's face and a second connection line between the mouth vertex and a chin vertex of the user's face.
Optionally, the processor 110 is further configured to maintain the current jaw-facial parameter of the user when the angle value of the included angle belongs to a first preset angle value range; the processor 110 is further configured to adjust the angle value to the first preset angle value range when the angle value of the included angle exceeds the first preset angle value range.
It should be understood that, in the embodiment of the present application, the input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the Graphics Processing Unit 1041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An image processing method, characterized in that the method comprises:
acquiring feature point information of mandible part feature points in face feature points of a user;
calculating jaw and facial parameters of the user according to the characteristic point information of the lower jaw part characteristic points;
determining the face shape of the user according to the maxillofacial parameters;
adjusting the maxillofacial parameters based on preset maxillofacial parameters matched with the face shape of the user.
2. The method according to claim 1, wherein the calculating of the maxillofacial parameters of the user according to the feature point information of the feature points of the mandibular portion comprises:
identifying E line characteristic points from the mandible part characteristic points;
and calculating the jaw and facial parameters of the user according to the characteristic point information of the characteristic point of the E line.
3. The method of claim 2, wherein the E-line feature points comprise a nose vertex, a mouth vertex, and a chin vertex of the user's face;
before the E-line characteristic point is identified, the method further includes:
determining the mouth state of the user according to the feature point information of the face feature point of the user;
the method further comprises the following steps:
when the mouth state is an exposed state, the mouth vertex is a tooth highest point of an exposed tooth part;
when the mouth state is a non-exposed state, the mouth apex is a highest point of the lip.
4. The method according to claim 1, wherein the calculating of the maxillofacial parameters of the user according to the feature point information of the feature points of the mandibular portion comprises:
and acquiring the jaw and face parameters of the user by calculating an included angle between a first connecting line of the nose vertex and the mouth vertex of the face of the user and a second connecting line of the mouth vertex and the chin vertex of the face of the user.
5. The method of claim 4, wherein after said obtaining of the user's maxillofacial parameters, the method further comprises:
under the condition that the angle value of the included angle belongs to a first preset angle value range, maintaining the current maxillofacial parameters of the user;
and under the condition that the angle value of the included angle exceeds the first preset angle value range, adjusting the angle value to the first preset angle value range.
6. An image processing device is characterized by comprising an acquisition module, a calculation module, a determination module and an adjustment module;
the acquisition module is used for acquiring feature point information of mandible part feature points in the face feature points of the user;
the calculating module is used for calculating the maxillofacial parameters of the user according to the feature point information of the feature points of the mandible part acquired by the acquiring module;
the determining module is used for determining the face shape of the user according to the maxillofacial parameters calculated by the calculating module;
the adjusting module is used for adjusting the maxillofacial parameters based on preset maxillofacial parameters matched with the face shape of the user determined by the determining module.
7. The apparatus of claim 6, further comprising: an identification module;
the identification module is used for identifying E line characteristic points from the mandible part characteristic points;
the calculating module is specifically configured to calculate the maxillofacial parameters of the user according to the feature point information of the E-line feature point identified by the identifying module.
8. The apparatus of claim 7, wherein the E-line feature points comprise a nose vertex, a mouth vertex, and a chin vertex of the user's face;
the determining module is further configured to determine a mouth state of the user according to the feature point information of the facial feature point of the user, which is acquired by the acquiring module;
when the mouth state is an exposed state, the mouth vertex is a tooth highest point of an exposed tooth part;
when the mouth state is a non-exposed state, the mouth apex is a highest point of the lip.
9. The apparatus of claim 6, further comprising: an acquisition module;
the obtaining module is used for obtaining the jaw face parameters of the user by calculating an included angle between a first connecting line of a nose vertex and a mouth vertex of the face of the user and a second connecting line of the mouth vertex and a chin vertex of the face of the user.
10. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, which program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011363526.8A CN112488912A (en) | 2020-11-27 | 2020-11-27 | Image processing method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011363526.8A CN112488912A (en) | 2020-11-27 | 2020-11-27 | Image processing method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112488912A true CN112488912A (en) | 2021-03-12 |
Family
ID=74936718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011363526.8A Pending CN112488912A (en) | 2020-11-27 | 2020-11-27 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112488912A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102999929A (en) * | 2012-11-08 | 2013-03-27 | 大连理工大学 | Triangular gridding based human image face-lift processing method |
CN107833177A (en) * | 2017-10-31 | 2018-03-23 | 维沃移动通信有限公司 | Image processing method and mobile terminal
CN109145866A (en) * | 2018-09-07 | 2019-01-04 | 北京相貌空间科技有限公司 | Determine the method and device of side face tilt angle |
CN109389682A (en) * | 2017-08-09 | 2019-02-26 | 上海影子智能科技有限公司 | Automatic adjustment method for a three-dimensional face model
CN110060348A (en) * | 2019-04-26 | 2019-07-26 | 北京迈格威科技有限公司 | Facial image shaping methods and device |
CN111652795A (en) * | 2019-07-05 | 2020-09-11 | 广州虎牙科技有限公司 | Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium |
- 2020-11-27: Application CN202011363526.8A filed in China; published as CN112488912A; status: active, Pending
Similar Documents
Publication | Title |
---|---|
US11043011B2 (en) | Image processing method, apparatus, terminal, and storage medium for fusing images of two objects | |
JP6864449B2 (en) | Methods and devices for adjusting the brightness of the image | |
CN110390632B (en) | Image processing method and device based on dressing template, storage medium and terminal | |
JP6025690B2 (en) | Information processing apparatus and information processing method | |
JP4862955B1 (en) | Image processing apparatus, image processing method, and control program | |
CN108171789B (en) | Virtual image generation method and system | |
JP4760999B1 (en) | Image processing apparatus, image processing method, and control program | |
US20140254939A1 (en) | Apparatus and method for outputting information on facial expression | |
WO2018161289A1 (en) | Depth-based control method, depth-based control device and electronic device | |
TW202109359A (en) | Face image processing method, image equipment and storage medium | |
CN108682050B (en) | Three-dimensional model-based beautifying method and device | |
JP2015088096A (en) | Information processor and information processing method | |
JP2015088095A (en) | Information processor and information processing method | |
JP2021517676A (en) | Image processing methods and devices, image devices and storage media | |
CN108376421A (en) | A method of human face three-dimensional model is generated based on shape from shading method | |
JP2015088098A (en) | Information processor and information processing method | |
CN103945104A (en) | Information processing method and electronic equipment | |
US11120624B2 (en) | Three-dimensional head portrait generating method and electronic device | |
CN109255761B (en) | Image processing method and device and electronic equipment | |
JP5419777B2 (en) | Face image synthesizer | |
JP5419773B2 (en) | Face image synthesizer | |
CN112488912A (en) | Image processing method and device and electronic equipment | |
CN110910487B (en) | Construction method, construction device, electronic device, and computer-readable storage medium | |
CN114743252B (en) | Feature point screening method, device and storage medium for head model | |
CN114972014A (en) | Image processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||