CN108550185A - Face beautification processing method and apparatus - Google Patents

Face beautification processing method and apparatus

Info

Publication number
CN108550185A
CN108550185A (application number CN201810549499.XA)
Authority
CN
China
Prior art keywords
key point
three-dimensional model
human face
facial image
three-dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810549499.XA
Other languages
Chinese (zh)
Inventor
欧阳丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810549499.XA
Publication of CN108550185A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery

Abstract

The present invention proposes a face beautification processing method and apparatus. The method includes: obtaining a two-dimensional facial image and the depth information corresponding to the facial image; performing three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model; comparing the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in a reference three-dimensional model to determine a position difference, wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model; and adjusting the three-dimensional face model according to the position difference of each first key point. This improves the aesthetics of each part of the three-dimensional face model and achieves a micro-shaping (subtle reshaping) effect on the face within the three-dimensional model.

Description

Face beautification processing method and apparatus
Technical field
The present invention relates to the technical field of image processing, and in particular to a face beautification processing method and apparatus.
Background technology
With the development of mobile terminals and image processing technology, users commonly apply beautification processing to the facial images captured when taking portraits with a mobile terminal. Beautification processing improves the aesthetics of the face in the image and increases user engagement.
In the related art, beautification processing of facial images is based on 2D images; the processing effect is unnatural, the sense of realism is weak, and user satisfaction is not high.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, the present invention proposes a face beautification processing method. A three-dimensional face model is built and compared with a preset reference three-dimensional model, and the three-dimensional face model is adjusted according to the differences between the two. This improves the aesthetics of each part of the three-dimensional face model, achieves a micro-shaping effect on the face within the three-dimensional model, and increases user satisfaction.
The present invention further proposes a face beautification processing apparatus.
The present invention further proposes an electronic device.
The present invention further proposes a computer-readable storage medium.
An embodiment of the first aspect of the present invention proposes a face beautification processing method, including:
obtaining a two-dimensional facial image and the depth information corresponding to the facial image;
performing three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model;
comparing the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in a reference three-dimensional model to determine a position difference, wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model; and
adjusting the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
An embodiment of the second aspect of the present invention proposes a face beautification processing apparatus, including:
an acquisition module, configured to obtain a two-dimensional facial image and the depth information corresponding to the facial image;
a reconstruction module, configured to perform three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model;
a determination module, configured to compare the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in a reference three-dimensional model to determine a position difference, wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model; and
an adjustment module, configured to adjust the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
An embodiment of the third aspect of the present invention proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the face beautification processing method of the first aspect is implemented.
An embodiment of the fourth aspect of the present invention proposes a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the face beautification processing method of the first aspect is implemented.
The technical solutions provided by the embodiments of the present invention may include the following advantageous effects:
A two-dimensional facial image and its corresponding depth information are obtained; three-dimensional reconstruction is performed according to the depth information and the facial image to obtain a three-dimensional face model; the relative position of each first key point in the three-dimensional face model is compared with the relative position of the corresponding second key point in a reference three-dimensional model to determine a position difference, wherein each second key point in the reference three-dimensional model corresponds to a first key point in the three-dimensional face model; and the three-dimensional face model is adjusted according to the position difference of each first key point. By adjusting the obtained three-dimensional face model, the aesthetics of each part of the model is improved, the micro-shaping effect on the face is presented, and beautification of the three-dimensional face image is achieved within the three-dimensional model.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a face beautification processing method provided by an embodiment of the present invention;
Fig. 2 is a frame diagram of a three-dimensional face model provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another face beautification processing method provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of yet another face beautification processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a face beautification processing apparatus provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the internal structure of the electronic device 200 in one embodiment; and
Fig. 7 shows the image processing circuit provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements, or elements having identical or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.
The face beautification processing method and apparatus of the embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a face beautification processing method provided by an embodiment of the present invention.
As shown in Fig. 1, the method includes the following steps.
Step 101: obtain a two-dimensional facial image and the depth information corresponding to the facial image.
As a possible implementation, the camera assembly used for image acquisition may be one capable of obtaining depth information, for example a dual camera, a depth camera (RGBD, Red-Green-Blue Depth), or a structured-light or time-of-flight (TOF) camera; these are not listed exhaustively here.
During image acquisition, the face is identified by face recognition technology, and the camera assembly obtains at least two facial images, together with the corresponding depth information, from at least two different angles.
Step 102: perform three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model.
Specifically, key point recognition is performed on each facial image to obtain third key points, where a third key point is a key point recognized in the two-dimensional facial image and carries facial feature information. For each facial image, the relative position in three-dimensional space of the first key point corresponding to each third key point is determined according to the depth information of the third key point and its planar distance on the facial image, where a first key point is a key point of the face in three-dimensional space and corresponds to the facial key point indicated by the third key point. According to the relative positions of the first key points in three-dimensional space, adjacent first key points are connected to generate local facial three-dimensional skeletons, and according to the identical first key points shared by different local skeletons, the local skeletons are spliced together to obtain the frame of the three-dimensional face model. Fig. 2 is a frame diagram of a three-dimensional face model provided by an embodiment of the present invention, i.e., the three-dimensional face model frame obtained by three-dimensional reconstruction from the facial image and its depth information.
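The section above does not spell out how a third key point's pixel position and depth value become a first key point in three-dimensional space. The following minimal sketch shows one common way to do it, assuming a pinhole camera whose intrinsics (fx, fy, cx, cy) are known; the intrinsics and the function names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def backproject_keypoints(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D face key points into 3D using per-pixel depth.

    keypoints_2d : (N, 2) array of (u, v) pixel coordinates (third key points)
    depth_map    : (H, W) array of depth values aligned with the face image
    fx, fy, cx, cy : assumed pinhole-camera intrinsics
    Returns an (N, 3) array of 3D positions (first key points).
    """
    pts_3d = []
    for u, v in keypoints_2d:
        z = depth_map[int(round(v)), int(round(u))]
        x = (u - cx) * z / fx          # planar offset scaled by depth
        y = (v - cy) * z / fy
        pts_3d.append((x, y, z))
    return np.asarray(pts_3d, dtype=np.float64)

# Key points obtained from several views can then be merged by their shared
# landmark indices to splice the local skeletons into one face frame.
```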
Then, a number of first areas in the frame of the three-dimensional face model are obtained, where a first area is a closed region whose vertices are first key points; in Fig. 2, for example, each grid cell is a first area. According to the third key points in the facial image, the face region of the facial image is divided into a number of second areas whose vertices are third key points. According to the correspondence between the first key points and the third key points, the first area corresponding to each second area is determined. As one possible implementation, the picture content displayed in each second area is rendered and then mapped, as a local skin texture, onto the corresponding first area of the three-dimensional face model, yielding the skin texture map covering the frame of the model. This improves the accuracy of the skin texture map covering the frame of the three-dimensional face model and improves the vividness of the model.
As another possible implementation, the picture displayed in a second area may first be mapped onto the corresponding first area of the three-dimensional face model, and the mapped picture in the first area is then rendered to generate a skin texture map that fits the frame of the model. This likewise improves the accuracy of the skin texture map covering the frame and the vividness of the model.
As yet another possible implementation, the picture displayed in a second area may be rendered at the same time as it is mapped onto the three-dimensional face model, so that mapping and rendering are performed in parallel. This parallel operation improves processing speed while also improving the accuracy of the skin texture map covering the frame and the vividness of the model.
Regarding the rendering of the picture content displayed in a second area, as one possible implementation, the picture displayed in the second area is rendered according to the material parameters corresponding to the first area, obtaining the local skin texture map for that first area. For example, in a three-dimensional (3D) model the material of the eye-pupil part is a translucent glass-like material; according to these material parameters, the region of the second area corresponding to the eye-pupil part is rendered to obtain the texture map for the eye-pupil part. This improves the accuracy of the generated local skin texture map of the corresponding first area and thus the vividness of the three-dimensional face model.
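The rendering of a second area according to the material parameters of its first area is described only in words. The sketch below is a simplified stand-in under assumed material parameters (a per-region material table and a plain box blur replacing a real renderer); it only illustrates the per-region bookkeeping, not an actual shading pipeline.

```python
import numpy as np

# Assumed material table: per facial region, a blur radius and a specular flag.
MATERIALS = {
    "pupil": {"blur": 0, "specular": True},   # glass-like, keep sharp highlights
    "skin":  {"blur": 2, "specular": False},
    "lips":  {"blur": 1, "specular": False},
}

def render_second_area(image, mask, region_label):
    """Render one second area before mapping it onto the matching first area.

    image        : (H, W, 3) face image
    mask         : (H, W) boolean mask of the second area (key-point triangle)
    region_label : which assumed material to apply
    """
    params = MATERIALS.get(region_label, MATERIALS["skin"])
    patch = image.copy()
    k = params["blur"]
    if k > 0:
        # Naive box blur inside the region as a stand-in for material rendering.
        padded = np.pad(patch, ((k, k), (k, k), (0, 0)), mode="edge")
        blurred = np.zeros_like(patch, dtype=np.float64)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                blurred += padded[k + dy: k + dy + patch.shape[0],
                                  k + dx: k + dx + patch.shape[1]]
        blurred /= (2 * k + 1) ** 2
        patch[mask] = blurred[mask].astype(patch.dtype)
    return patch  # the caller maps this patch onto the corresponding first area
```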
Step 103: compare the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in the reference three-dimensional model to determine the position difference.
Here, the reference three-dimensional model is a preset three-dimensional model; for example, it may be a standard face model, a celebrity face model, or a target shaping model provided by a cosmetic surgeon. Each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model.
Step 104: adjust the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
Specifically, the position of each first key point in the three-dimensional face model is adjusted according to its position difference. Then, within the three-dimensional face model, the first key points associated with an adjusted first key point are determined, and their positions are adjusted according to the adjustment amplitude of the corresponding adjusted first key point, where the adjustment amplitude of an associated first key point is smaller than that of the corresponding adjusted first key point.
A first key point and its adjacent first key points together delimit a first area. For a first area that deforms while a first key point is being adjusted, the local skin texture covering the deformed first area is stretched or shrunk to fill the first area after deformation. By adjusting the face model in this way, the aesthetics of each part of the three-dimensional face model is improved and the micro-shaping effect on the three-dimensional model is achieved.
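The comparison between the first key points and the second key points of the reference three-dimensional model is stated without a concrete formula. One plausible realization, sketched below under assumptions the patent does not make (alignment by eye-corner landmarks and a partial adjustment strength), aligns the reference model to the face model's scale, takes the per-key-point difference, and moves each first key point part of the way toward its reference position.

```python
import numpy as np

def align_reference(face_pts, ref_pts, left_eye, right_eye):
    """Scale and translate the reference model into the face model's frame.

    left_eye / right_eye are assumed landmark indices (e.g. the eye corners of a
    68-point layout) used only to fix the scale; both models are assumed to be
    in roughly the same orientation already.
    """
    face_scale = np.linalg.norm(face_pts[left_eye] - face_pts[right_eye])
    ref_scale = np.linalg.norm(ref_pts[left_eye] - ref_pts[right_eye])
    ref = (ref_pts - ref_pts.mean(axis=0)) * (face_scale / ref_scale)
    return ref + face_pts.mean(axis=0)

def adjust_toward_reference(face_pts, ref_pts, left_eye=36, right_eye=45,
                            strength=0.5):
    """Compute per-key-point position differences and apply a partial adjustment.

    strength < 1 keeps the result between the user's own face and the reference,
    preserving some user characteristics (an assumed design choice).
    """
    ref_aligned = align_reference(face_pts, ref_pts, left_eye, right_eye)
    diffs = ref_aligned - face_pts          # position difference per first key point
    return face_pts + strength * diffs, diffs
```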
In the face beautification processing method of this embodiment of the present invention, a two-dimensional facial image and its corresponding depth information are obtained; three-dimensional reconstruction is performed according to the depth information and the facial image to obtain a three-dimensional face model; the relative position of each first key point in the three-dimensional face model is compared with the relative position of the corresponding second key point in the reference three-dimensional model to determine the position difference, where each second key point in the reference three-dimensional model corresponds to a first key point in the three-dimensional face model; and the three-dimensional face model is adjusted according to the position difference of each first key point. By adjusting the obtained three-dimensional face model, the aesthetics of each part of the model is improved and the micro-shaping effect on the face is achieved within the three-dimensional model.
Based on the above embodiment, an embodiment of the present invention further proposes a possible implementation of the face beautification processing method that further illustrates the process of adjusting the three-dimensional face model according to the position difference of each first key point. Fig. 3 is a schematic flowchart of another face beautification processing method provided by an embodiment of the present invention. As shown in Fig. 3, based on the previous embodiment, step 104 may further include the following sub-steps.
Step 1041: according to the position difference of each first key point of the three-dimensional face model, adjust the position of each first key point in the three-dimensional face model.
Specifically, the first key points that need adjustment are determined according to the position difference of each first key point of the three-dimensional face model. As one possible implementation, each position difference is compared with a preset threshold; the first key points whose difference exceeds the threshold are determined to be the first key points that need adjustment, together with their corresponding adjustment amplitude ranges, and each such first key point is then adjusted within its corresponding adjustment amplitude range. By selecting only the first key points with larger position differences for adjustment, the amount of computation is reduced while certain user characteristics are preserved.
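A small sketch of the threshold step just described; the threshold value and the rule turning a position difference into an adjustment amplitude range are assumptions, since the text only says that key points whose difference exceeds a preset threshold are selected.

```python
import numpy as np

def select_keypoints_to_adjust(diffs, threshold=0.02, max_fraction=0.6):
    """Pick the first key points whose position difference exceeds the threshold.

    diffs        : (N, 3) per-key-point position differences
    threshold    : assumed preset threshold on the difference magnitude
    max_fraction : assumed cap turning each difference into an adjustment range
    Returns the indices to adjust and, per index, the allowed adjustment amplitude.
    """
    magnitudes = np.linalg.norm(diffs, axis=1)
    idx = np.where(magnitudes > threshold)[0]
    amplitude_range = magnitudes[idx] * max_fraction
    return idx, amplitude_range
```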
Step 1042: in the three-dimensional face model, determine the first key points associated with an adjusted first key point and adjust their positions.
Specifically, in the three-dimensional face model, the first key points associated with an adjusted first key point are determined, and the distance between each associated first key point and the corresponding adjusted first key point is determined. The adjustment amplitude of the corresponding adjusted first key point is attenuated according to this distance to obtain the adjustment amplitude of the associated first key point, and the associated first key point is adjusted accordingly, so that the adjustment transitions smoothly.
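The attenuation of the adjustment for associated first key points is described only qualitatively (the farther the associated key point, the smaller its adjustment). The linear falloff below is one assumed realization.

```python
import numpy as np

def propagate_adjustment(points, moved_idx, displacement, radius):
    """Apply a distance-attenuated share of one key point's displacement to its neighbours.

    points       : (N, 3) first key points (already containing the moved point's new position)
    moved_idx    : index of the adjusted first key point
    displacement : (3,) displacement that was applied to it
    radius       : assumed influence radius; points beyond it are untouched
    """
    center = points[moved_idx]
    out = points.copy()
    for i, p in enumerate(points):
        if i == moved_idx:
            continue
        d = np.linalg.norm(p - center)
        if d < radius:
            weight = 1.0 - d / radius      # linear falloff: closer points move more
            out[i] = p + weight * displacement
    return out
```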
Step 1043: for a first area that deforms during the position adjustment, stretch or shrink the local skin texture covering the deformed first area to fill the first area after deformation.
Specifically, a first key point is a vertex of a first area. If a first key point is adjusted upward (outward), the local skin texture of the deformed first area needs to be stretched to fill the first area after deformation; if the first key point is adjusted downward (inward), the local skin texture of the deformed first area needs to be shrunk to fill the first area after deformation. In this way, after the position of a first key point is adjusted, the corresponding first area is adjusted as well, so that the skin texture flows more naturally.
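Stretching or shrinking the local skin texture of a deformed first area can be realized in several ways; one assumed approach, sketched here, resamples the original texture patch over the deformed triangle using barycentric coordinates, so the same patch is stretched or compressed to fill the new area.

```python
import numpy as np

def resample_patch(texture, src_triangle_uv, dst_triangle_px, out_image):
    """Fill a deformed first area by stretching its original texture patch.

    For every pixel inside the deformed (destination) triangle, look up the color
    at the matching barycentric position of the source texture triangle.
    src_triangle_uv : (3, 2) array of texture coordinates before deformation
    dst_triangle_px : (3, 2) array of pixel positions after deformation
    """
    (x0, y0), (x1, y1), (x2, y2) = dst_triangle_px
    min_x, max_x = int(min(x0, x1, x2)), int(max(x0, x1, x2)) + 1
    min_y, max_y = int(min(y0, y1, y2)), int(max(y0, y1, y2)) + 1
    denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    for y in range(min_y, max_y):
        for x in range(min_x, max_x):
            w0 = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / denom
            w1 = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / denom
            w2 = 1.0 - w0 - w1
            if w0 < 0 or w1 < 0 or w2 < 0:
                continue                      # pixel lies outside the deformed triangle
            u, v = (w0 * src_triangle_uv[0] + w1 * src_triangle_uv[1]
                    + w2 * src_triangle_uv[2])
            out_image[y, x] = texture[int(v), int(u)]
    return out_image
```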
In the face beautification processing method of this embodiment, the relative positions of the first key points in the three-dimensional face model obtained by three-dimensional reconstruction are compared with those of the second key points in the reference three-dimensional model to determine position differences, and the obtained three-dimensional face model is adjusted according to the position difference of each first key point. This improves the aesthetics of each part of the three-dimensional face model and achieves the micro-shaping effect on the face within the three-dimensional model.
After the three-dimensional face reconstruction obtains the three-dimensional face model, the skin texture map covering the surface of the model can also be beautified, and once the adjustment of the three-dimensional model is complete, the model can be mapped to two-dimensional space to obtain a beautified picture for display. To this end, an embodiment of the present invention proposes yet another possible implementation of the face beautification processing method. Fig. 4 is a schematic flowchart of yet another face beautification processing method provided by an embodiment of the present invention. As shown in Fig. 4, the method may include the following steps.
Step 401: obtain a two-dimensional facial image and the depth information corresponding to the facial image.
Specifically, when the user needs to acquire an image, the camera device is driven to scan the face, obtaining the user's two-dimensional facial image and the corresponding depth information.
Step 402: recognize the facial image and confirm that the face does not include accessories.
Specifically, the acquired two-dimensional facial image of the user is recognized to confirm that the face does not include accessories such as glasses. If the user is wearing glasses, the user is prompted to remove them, which improves the accuracy of the subsequent construction of the three-dimensional face model.
Step 403: perform three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model.
Specifically, reference may be made to step 102 of the embodiment corresponding to Fig. 1, which is not repeated here.
Step 404: beautify the skin texture map covering the surface of the three-dimensional face model.
It can be understood that when there is acne in the facial image, the color at the corresponding position in the skin texture map may be red; when there are freckles, the color at the corresponding positions may be brown or black; and when there is a dark mole, the color at the corresponding position may be black. Because these colors differ from normal skin color, they can be identified as abnormal colors.
Therefore, as a possible implementation of this embodiment of the present application, whether an abnormal range exists can be determined according to the colors of the skin texture map of the three-dimensional face model. If no abnormal range exists, no processing is needed. If an abnormal range exists, it can be beautified with a corresponding beautification strategy according to the relative positions in three-dimensional space of the points within the abnormal range and the color information of the abnormal range.
Under normal circumstances, acne protrudes from the skin surface, a dark mole may also protrude, and freckles do not protrude. Therefore, as one possible implementation, the abnormality type of an abnormal range is determined according to the height difference between its central point and its edge points. The abnormality type may be planar, e.g. a spot or birthmark that does not protrude from the skin surface, or raised, e.g. acne or a mole that protrudes from the skin surface. The corresponding beautification strategy is determined by the abnormality type of the abnormal range and its color information; in this embodiment the strategy is skin-smoothing processing.
Then, the matching skin color corresponding to the abnormal range is determined according to the normal skin color in the skin texture map, and skin-smoothing processing is applied to the abnormal range, according to the matching skin color, using the filter range and filter strength indicated by the beautification strategy. For example, acne and spots are abnormalities that affect the aesthetics and the user wants them weakened, so the corresponding beautification strategy applies a stronger smoothing degree; tattoos, birthmarks, or moles with deeper color are usually regarded as user characteristics that do not affect beauty, and the user usually wants to keep them, so the corresponding strategy applies a weaker smoothing degree.
Alternatively, the skin color within the abnormal range may be filled according to the matching skin color corresponding to the abnormal range.
For example, when the abnormality type is raised and the color information is red, the abnormal range may be acne, and the acne-removal beautification strategy may be: apply skin-smoothing processing to the acne and fill the skin color in the abnormal range corresponding to the acne according to the normal skin color near the acne, which is denoted the matching skin color in the embodiments of the present application. Alternatively, when the abnormality type is not raised and the color is brown, the abnormal range may be freckles, and the freckle-removal beautification strategy may be: fill the skin color in the abnormal range corresponding to the freckles according to the normal skin color near the freckles, i.e. the matching skin color.
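The classification of an abnormal range by the height difference between its center and edge, and the choice of a stronger or weaker smoothing, are only described in words above. The sketch below uses assumed thresholds and a crude redness test as placeholders for whatever classifier an implementation would actually use.

```python
import numpy as np

def classify_abnormal_range(heights, colors, height_thresh=0.3):
    """Classify an abnormal skin range and pick a smoothing strength.

    heights      : heights of points in the abnormal range (center first, edge last)
    colors       : (M, 3) RGB colors inside the range
    height_thresh: assumed threshold separating raised (acne/mole) from flat (freckle/birthmark)
    """
    raised = (heights[0] - heights[-1]) > height_thresh
    mean_rgb = np.asarray(colors).mean(axis=0)
    reddish = mean_rgb[0] > 1.2 * mean_rgb[1]          # crude redness test (assumption)

    if raised and reddish:            # likely acne: smooth strongly
        return {"type": "acne", "smooth_strength": 0.9}
    if not raised and not reddish:    # likely freckle/birthmark: treat as user feature
        return {"type": "spot_or_mark", "smooth_strength": 0.3}
    return {"type": "other", "smooth_strength": 0.5}

def fill_with_matching_skin(texture, mask, matching_color, strength):
    """Blend the abnormal range toward the matching skin color found nearby."""
    texture = texture.astype(np.float64)
    texture[mask] = ((1 - strength) * texture[mask]
                     + strength * np.asarray(matching_color))
    return texture
```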
Step 405: compare the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in the reference three-dimensional model to determine the position difference.
Step 406: adjust the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
For example, according to the position difference of the first key points corresponding to the nose in the three-dimensional face model, it may be determined whether the bridge of the user's nose is high and straight. If the bridge of the nose is relatively low, the range that needs adjustment is determined according to the position difference of the first key points corresponding to the nose, and the adjustment here is a raising adjustment, so that the bridge of the nose is higher after the adjustment. This achieves a micro-shaping adjustment of the nose bridge and improves the aesthetics of the nose. Similarly, micro-shaping effects such as filling out the forehead, plumping the apple cheeks, and reducing the mandible can be achieved in the three-dimensional face model; the principle is the same and is not repeated here. By adjusting the three-dimensional face model, the micro-shaping effect on the face is achieved and the aesthetics of the face in the model is improved.
Step 407: generate user preferences according to the adjustments made to the three-dimensional face model.
Specifically, the user's preferences are generated according to the adjustments made to the three-dimensional face model; the user preferences indicate the adjustment amplitude ranges of the first key points.
Step 408: if, at shooting time, the captured image is recognized as matching the pre-adjustment three-dimensional face model, adjust the user three-dimensional model corresponding to the captured image using the user preferences.
In one possible scenario, face recognition is performed when the user shoots. If, according to the collected facial image, the captured image is recognized as matching the pre-adjustment three-dimensional face model, it is determined that user preferences have already been generated for this user, and the user three-dimensional model corresponding to the captured image can be adjusted according to those preferences.
For example, the user may register the face in advance; the registration process corresponds to steps 401-406 of this embodiment. Through face registration, the adjustment of the three-dimensional face model is completed and the user preferences are generated, so that when the user takes pictures again, the three-dimensional face model can be adjusted directly with the generated preferences, saving adjustment time and improving processing efficiency.
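Steps 407 and 408 describe storing the adjustment as a user preference and reusing it on later shots. The following sketch shows one possible bookkeeping; the face signature used as a key and the storage format are assumptions, and in practice the match would come from the device's face recognition rather than a dictionary lookup.

```python
# Assumed in-memory preference store keyed by a face signature.
user_preferences = {}

def register_user(face_id, keypoint_adjustments):
    """Store the adjustment amplitudes produced when the user's model was first tuned."""
    user_preferences[face_id] = keypoint_adjustments   # e.g. {kp_index: (dx, dy, dz)}

def apply_preferences(face_id, face_points):
    """If the captured face matches a registered model, reuse its stored adjustments.

    face_points : (N, 3) numpy array of first key points of the newly built model.
    """
    prefs = user_preferences.get(face_id)
    if prefs is None:
        return face_points                      # no registration: leave the model unchanged
    adjusted = face_points.copy()
    for idx, delta in prefs.items():
        adjusted[idx] = adjusted[idx] + delta
    return adjusted
```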
Step 409: map the adjusted user three-dimensional model to two-dimensional space to obtain a beautified picture, and display the beautified picture.
Specifically, after the skin texture map covering the surface of the three-dimensional face model has been beautified and the beautified three-dimensional face model has been obtained, the beautified model can be mapped to the two-dimensional plane to obtain the beautified facial image. In this embodiment, because the skin texture map is three-dimensional, beautifying it makes the beautified skin texture more natural, so that when the beautified three-dimensional face model is mapped to the two-dimensional plane, the resulting two-dimensional facial image is more realistic and the beautification effect is more prominent.
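The final mapping of the adjusted, textured model back to a two-dimensional picture is not detailed in the text. A minimal perspective projection of the model's vertices, under the same assumed pinhole intrinsics as earlier and without any occlusion-aware rasterization, illustrates the idea.

```python
import numpy as np

def project_to_image(vertices, colors, fx, fy, cx, cy, width, height):
    """Project beautified 3D vertices back to a 2D image (nearest-vertex splat only).

    vertices : (N, 3) adjusted model vertices in camera coordinates (z > 0)
    colors   : (N, 3) per-vertex colors sampled from the beautified skin texture
    """
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuffer = np.full((height, width), np.inf)
    for (x, y, z), c in zip(vertices, colors):
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height and z < zbuffer[v, u]:
            zbuffer[v, u] = z                 # keep the nearest surface point
            image[v, u] = c
    return image
```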
In the face beautification processing method of this embodiment of the present invention, a two-dimensional facial image and its corresponding depth information are obtained; three-dimensional reconstruction is performed according to the depth information and the facial image to obtain a three-dimensional face model; the skin texture map on the surface of the model is beautified, improving the smoothness and beautification degree of the facial skin; the relative position of each first key point in the model is compared with that of the corresponding second key point in the reference three-dimensional model to determine the position difference, where each second key point in the reference three-dimensional model corresponds to a first key point in the model; and the model is adjusted according to the position difference of each first key point. Beautifying the obtained three-dimensional face model improves the beautification degree of the skin texture; adjusting the model according to the position difference of each first key point improves the aesthetics of each part and achieves the micro-shaping effect on the face within the three-dimensional model. In addition, user preferences are generated from the adjustment process of the three-dimensional model, and the three-dimensional model corresponding to a captured image is adjusted according to those preferences, improving adjustment efficiency; the beautified image obtained from the adjusted three-dimensional model is therefore more natural and realistic.
To implement the above embodiments, the present invention further proposes a face beautification processing apparatus.
Fig. 5 is a schematic structural diagram of a face beautification processing apparatus provided by an embodiment of the present invention.
As shown in Fig. 5, the apparatus includes an acquisition module 51, a reconstruction module 52, a determination module 53, and an adjustment module 54.
The acquisition module 51 is configured to obtain a two-dimensional facial image and the depth information corresponding to the facial image.
The reconstruction module 52 is configured to perform three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model.
The determination module 53 is configured to compare the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in the reference three-dimensional model to determine the position difference, wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model.
The adjustment module 54 is configured to adjust the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
Further, in a possible implementation of this embodiment of the present invention, the adjustment module 54 may further include an adjustment unit and a filling unit.
The adjustment unit is configured to adjust the position of each first key point in the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
The filling unit is configured to, for a first area that deforms during the position adjustment, stretch or shrink the local skin texture covering the deformed first area to fill the first area after deformation.
As a possible implementation, the adjustment unit may specifically be configured to:
determine, according to the position difference of each first key point of the three-dimensional face model, the first key points that need adjustment and the corresponding adjustment amplitude ranges; and
adjust each first key point that needs adjustment within its corresponding adjustment amplitude range.
As another possible implementation, the adjustment unit may further be configured to:
determine, in the three-dimensional face model, the first key points associated with an adjusted first key point; and
adjust each associated first key point according to the adjustment amplitude of the corresponding adjusted first key point, wherein the adjustment amplitude of the associated first key point is smaller than the adjustment amplitude of the corresponding adjusted first key point.
As yet another possible implementation, the adjustment unit may further be configured to:
determine the distance between the associated first key point and the corresponding adjusted first key point; and
attenuate, according to the distance, the adjustment amplitude of the corresponding adjusted first key point to obtain the adjustment amplitude of the associated first key point.
Further, as a possible implementation of this embodiment, the apparatus may further include a beautification display module, specifically configured to:
generate user preferences according to the adjustments made to the three-dimensional face model, the user preferences indicating the adjustment amplitude ranges of the first key points;
recognize, at shooting time, that the captured image matches the pre-adjustment three-dimensional face model;
adjust, using the user preferences, the user three-dimensional model corresponding to the captured image; and
map the adjusted user three-dimensional model to two-dimensional space to obtain a beautified picture, and display the beautified picture.
As a possible implementation of this embodiment, the acquisition module 51 is specifically configured to:
obtain, during video acquisition, at least two facial images from at least two different angles.
As a possible implementation of this embodiment, the reconstruction module 52 is specifically configured to:
perform key point recognition on each facial image to obtain third key points;
determine, for each facial image, the relative position in three-dimensional space of the first key point corresponding to each third key point according to the depth information of the third key point and its planar distance on the facial image, and connect adjacent first key points according to their relative positions in three-dimensional space to generate local facial three-dimensional skeletons;
splice the different local facial three-dimensional skeletons according to the identical first key points they share, to obtain the frame of the three-dimensional face model; and
map the facial image onto the frame of the three-dimensional face model to obtain the skin texture map covering the frame.
As another possible implementation of this embodiment, the reconstruction module 52 is further configured to:
obtain a number of first areas in the frame, a first area being a closed region whose vertices are first key points;
divide, according to the third key points in the facial image, the face region of the facial image into a number of second areas whose vertices are third key points;
determine, according to the correspondence between the first key points and the third key points, the first area corresponding to each second area; and
render the picture content displayed in each second area and map it, as a local skin texture, onto the corresponding first area of the three-dimensional face model, to obtain the skin texture map covering the frame.
Further, as a possible implementation of this embodiment of the present invention, the apparatus may further include:
a beautification module, configured to beautify the skin texture map covering the surface of the three-dimensional face model.
As a possible implementation, the beautification module may further be configured to:
detect, according to the color information of the skin texture map of the three-dimensional face model, an abnormal range in the skin texture map; and
beautify the abnormal range with a corresponding beautification strategy according to the relative positions in three-dimensional space of the points within the abnormal range and the color information of the abnormal range.
As another possible implementation, the beautification module may specifically further be configured to:
determine, according to the height difference between the central point and the edge points of the abnormal range, the abnormality type of the abnormal range;
determine the corresponding beautification strategy according to the abnormality type and the color information; and
apply skin-smoothing processing to the abnormal range, according to the matching skin color corresponding to the abnormal range, using the filter range and filter strength indicated by the beautification strategy.
As a possible implementation of this embodiment, the apparatus further includes an identification module.
The identification module is configured to recognize the facial image and determine that the face does not include accessories.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment; the implementation principle is similar and is not repeated here.
The division of the modules in the face beautification processing apparatus above is only illustrative. In other embodiments, the face beautification processing apparatus may be divided into different modules as required, to complete all or part of the functions of the apparatus.
In the face beautification processing apparatus of this embodiment of the present invention, a two-dimensional facial image and its corresponding depth information are obtained; three-dimensional reconstruction is performed according to the depth information and the facial image to obtain a three-dimensional face model; the relative position of each first key point in the three-dimensional face model is compared with the relative position of the corresponding second key point in the reference three-dimensional model to determine the position difference, wherein each second key point in the reference three-dimensional model corresponds to a first key point in the three-dimensional face model; and the three-dimensional face model is adjusted according to the position difference of each first key point. This achieves the micro-shaping effect on the face within the three-dimensional model and increases user satisfaction. In addition, user preferences are generated from the adjustment process of the three-dimensional model, and the three-dimensional model corresponding to a captured image is adjusted according to those preferences, improving adjustment efficiency; the beautified image obtained from the adjusted three-dimensional model is therefore more natural and realistic.
To implement the above embodiments, an embodiment of the present invention further proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor. When the processor executes the program, the face beautification processing method described in the foregoing method embodiments is implemented.
Fig. 6 is a schematic diagram of the internal structure of the electronic device 200 in one embodiment. The electronic device 200 includes a processor 220, a memory 230, a display 240, and an input device 250 connected by a system bus 210. The memory 230 of the electronic device 200 stores an operating system and computer-readable instructions. The computer-readable instructions can be executed by the processor 220 to implement the face beautification processing method of the embodiments of the present application. The processor 220 provides computing and control capability and supports the operation of the entire electronic device 200. The display 240 of the electronic device 200 may be a liquid crystal display or an electronic-ink display, and the input device 250 may be a touch layer covering the display 240, a key, trackball, or touchpad arranged on the housing of the electronic device 200, or an external keyboard, touchpad, or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a laptop computer, a personal digital assistant, or a wearable device (such as a smart bracelet, smart watch, smart helmet, or smart glasses).
Those skilled in the art will understand that the structure shown in Fig. 6 is only a schematic diagram of the part of the structure related to the solution of the present application and does not limit the electronic device 200 to which the solution is applied; a specific electronic device 200 may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
To clearly explain the electronic device provided in this embodiment, referring to Fig. 7, the image processing circuit of an embodiment of the present invention is provided. The image processing circuit can be implemented with hardware and/or software components. As shown in Fig. 7, the image processing circuit specifically includes an ISP (Image Signal Processing) processor 310. The image processing circuit further includes an image sensor 320, a structured light sensor 330, a depth map generation chip 340, an encoder 350, a CPU 360, a GPU (Graphics Processing Unit) 370, a display 380, and a memory 390.
It should be noted that Fig. 7 is a schematic diagram of one possible implementation of the image processing circuit. For ease of explanation, only the aspects related to the embodiments of the present application are shown.
As shown in Fig. 7, the raw image data captured by the image sensor 320 is first processed by the ISP processor 310, which analyzes the raw image data to capture image statistics that can be used to determine one or more control parameters of the image sensor 320, including a facial image in YUV or RGB format. The image sensor 320 may include a color filter array (such as a Bayer filter) and corresponding photosensitive units; it can obtain the light intensity and wavelength information captured by each photosensitive unit and provide a set of raw image data that can be processed by the ISP processor 310. After processing the raw image data, the ISP processor 310 obtains the facial image in YUV or RGB format and sends it to the CPU 360.
When handling the raw image data, the ISP processor 310 can process it pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 310 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
As shown in Fig. 7, the structured light sensor 330 projects structured light onto the object, obtains the structured light reflected by the object, and obtains an infrared speckle pattern by imaging the reflected structured light. The structured light sensor 330 sends the infrared speckle pattern to the depth map generation chip 340, so that the depth map generation chip 340 determines the deformation of the structured light from the infrared speckle pattern and, from that, determines the depth of the object, obtaining a depth map (Depth Map) that indicates the depth of each pixel in the infrared speckle pattern. The depth map generation chip 340 sends the depth map to the CPU 360.
The CPU 360 obtains the facial image from the ISP processor 310 and the depth map from the depth map generation chip 340 and, combining previously obtained calibration data, aligns the facial image with the depth map so as to determine the depth information corresponding to each pixel in the facial image. Then the CPU 360 performs three-dimensional reconstruction according to the depth information and the facial image to obtain the three-dimensional face model.
The CPU 360 sends the three-dimensional face model to the GPU 370, so that the GPU 370 performs the method described in the foregoing embodiments on the three-dimensional face model, realizing the face beautification and obtaining the beautified facial image. The beautified facial image obtained by the GPU 370 can be displayed on the display 380 and/or encoded by the encoder 350 and stored in the memory 390, where the encoder 350 may be implemented by a coprocessor.
In one embodiment, there may be multiple memories 390, or the memory 390 may be divided into multiple storage spaces; the image data processed by the GPU 370 can be stored in a dedicated memory or a dedicated storage space, and the memory 390 may include DMA (Direct Memory Access) features. The memory 390 may be configured to implement one or more frame buffers.
For example, the following steps of the control method are implemented by the processor 220 in Fig. 6 or by the image processing circuit in Fig. 7 (specifically, the CPU 360 and the GPU 370):
The CPU 360 obtains a two-dimensional facial image and the depth information corresponding to the facial image; the CPU 360 performs three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model; the GPU 370 compares the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in a reference three-dimensional model to determine the position difference, wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model; and the GPU 370 adjusts the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
To implement the above embodiments, an embodiment of the present invention further proposes a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the face beautification processing method described in the foregoing method embodiments is implemented.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that specific features, structures, materials, or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present invention. In this specification, the schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a custom logic function or the steps of a process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logic functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one, or a combination, of the following technologies known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps of the above method embodiments can be completed by instructing the relevant hardware through a program. The program may be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (16)

1. A face beautification processing method, characterized in that the method comprises the following steps:
obtaining a two-dimensional facial image and the depth information corresponding to the facial image;
performing three-dimensional reconstruction according to the depth information and the facial image to obtain a three-dimensional face model;
comparing the relative position of each first key point in the three-dimensional face model with the relative position of the corresponding second key point in a reference three-dimensional model to determine a position difference, wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model; and
adjusting the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
2. The face beautification processing method according to claim 1, characterized in that adjusting the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model comprises:
adjusting the position of each first key point in the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model; and
for a first area that deforms during the position adjustment, stretching or shrinking the local skin texture covering the deformed first area to fill the first area after deformation.
3. The face beautification processing method according to claim 2, characterized in that adjusting the position of each first key point in the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model comprises:
determining, according to the position difference of each first key point of the three-dimensional face model, the first key points that need to be adjusted and the corresponding adjustment amplitude ranges;
performing position adjustment on the first key points that need to be adjusted, within the corresponding adjustment amplitude ranges.
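The selection and clamping described in claim 3 might look like the sketch below; the threshold, the maximum amplitude, and the use of a single scalar amplitude limit per key point are illustrative assumptions (the patent only requires an adjustment amplitude range per key point).

```python
import numpy as np

def plan_adjustments(position_diff, threshold=0.01, max_amplitude=0.05):
    """Pick the first key points whose difference from the reference exceeds a
    threshold, and clamp each planned move to the allowed amplitude."""
    magnitudes = np.linalg.norm(position_diff, axis=1)
    needs_adjust = magnitudes > threshold              # which key points to adjust
    clipped = np.clip(magnitudes, None, max_amplitude) # stay within the amplitude range
    directions = position_diff / (magnitudes[:, None] + 1e-8)
    moves = directions * clipped[:, None]
    moves[~needs_adjust] = 0.0
    return needs_adjust, moves

rng = np.random.default_rng(1)
diff = rng.normal(scale=0.03, size=(68, 3))
mask, moves = plan_adjustments(diff)
print(mask.sum(), np.linalg.norm(moves, axis=1).max() <= 0.05)
```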
4. The face beautification processing method according to claim 3, characterized in that after performing position adjustment on the first key points that need to be adjusted within the corresponding adjustment amplitude ranges, the method further comprises:
determining, in the three-dimensional face model, the first key points associated with an adjusted first key point;
performing position adjustment on the associated first key points according to the adjustment amplitude of the corresponding adjusted first key point; wherein the adjustment amplitude of an associated first key point is smaller than the adjustment amplitude of the corresponding adjusted first key point.
5. The face beautification processing method according to claim 4, characterized in that performing position adjustment on the associated first key points according to the adjustment amplitude of the corresponding adjusted first key point comprises:
determining the distance between an associated first key point and the corresponding adjusted first key point;
reducing the adjustment amplitude of the corresponding adjusted first key point according to the distance, to obtain the adjustment amplitude of the associated first key point.
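Claim 5's distance-dependent reduction could be realized as below; the Gaussian falloff and the radius value are assumptions, since the claim only requires that an associated key point move less than the directly adjusted one and that the reduction depend on their distance.

```python
import numpy as np

def propagate_adjustment(points, adjusted_idx, move, radius=0.1):
    """Move the adjusted first key point by `move`, and move nearby associated
    key points by a smaller amount that decays with distance (assumed falloff)."""
    points = np.asarray(points, dtype=float).copy()
    dists = np.linalg.norm(points - points[adjusted_idx], axis=1)
    weights = np.exp(-(dists / radius) ** 2)  # 1.0 at the adjusted point, <1 elsewhere
    points += weights[:, None] * np.asarray(move, dtype=float)
    return points

rng = np.random.default_rng(2)
model = rng.normal(size=(68, 3))
new_model = propagate_adjustment(model, adjusted_idx=30, move=[0.0, 0.02, 0.0])
print(np.linalg.norm(new_model - model, axis=1).max())  # largest move is at index 30
```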
6. The face beautification processing method according to any one of claims 1-5, characterized in that the method further comprises:
generating a user preference according to the adjustment made to the three-dimensional face model, the user preference being used to indicate the adjustment amplitude ranges of the first key points;
if a captured image is recognized during shooting as matching the three-dimensional face model before adjustment,
adjusting, by using the user preference, the three-dimensional user model corresponding to the captured image;
mapping the adjusted three-dimensional user model to two-dimensional space to obtain a beautified image, and displaying the beautified image.
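A rough sketch of how the user preference of claim 6 might be stored and re-applied; it records per-key-point offsets rather than amplitude ranges, which is an illustrative simplification, and every name here is hypothetical.

```python
import numpy as np

def build_user_preference(before_kpts, after_kpts):
    """Record, per first key point, the offset the user accepted."""
    return {"offsets": np.asarray(after_kpts, float) - np.asarray(before_kpts, float)}

def apply_user_preference(user_model_kpts, preference):
    """Re-apply the stored offsets to the 3D model built from a new shot."""
    return np.asarray(user_model_kpts, dtype=float) + preference["offsets"]

before = np.zeros((68, 3))
after = before + 0.01
pref = build_user_preference(before, after)
new_shot_model = np.ones((68, 3))
print(apply_user_preference(new_shot_model, pref)[0])  # [1.01 1.01 1.01]
```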
7. The face beautification processing method according to any one of claims 1-5, characterized in that acquiring the two-dimensional face image comprises:
during video capture, acquiring at least two face images from at least two different angles.
8. The face beautification processing method according to claim 7, characterized in that performing three-dimensional reconstruction according to the depth information and the face image to obtain the three-dimensional face model comprises:
performing key point recognition on each face image to obtain third key points;
for each face image, determining the relative position in three-dimensional space of the first key point corresponding to each third key point, according to the depth information of the third key point and the planar distance of the third key point on the face image; connecting adjacent first key points according to the relative positions of the first key points in three-dimensional space, to generate a local face three-dimensional frame;
splicing the different local face three-dimensional frames according to the identical first key points contained in the different local face three-dimensional frames, to obtain the frame of the three-dimensional face model;
mapping the face images onto the frame of the three-dimensional face model to obtain a skin texture map covering the frame.
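One plausible reading of the first reconstruction step in claim 8 is a pinhole back-projection of each third key point using its image-plane position and its depth; the camera intrinsics below are made-up defaults, since the claim does not specify a camera model.

```python
import numpy as np

def backproject_keypoints(kpts_2d, depths, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Lift third key points (u, v) with per-point depth into 3D first key points,
    assuming a simple pinhole camera (hypothetical intrinsics)."""
    kpts_2d = np.asarray(kpts_2d, dtype=float)
    depths = np.asarray(depths, dtype=float)
    x = (kpts_2d[:, 0] - cx) / fx * depths
    y = (kpts_2d[:, 1] - cy) / fy * depths
    return np.stack([x, y, depths], axis=1)

uv = np.array([[320.0, 240.0], [400.0, 260.0], [300.0, 300.0]])
z = np.array([0.45, 0.47, 0.46])  # depths in metres from the depth map
print(backproject_keypoints(uv, z))
```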
9. The face beautification processing method according to claim 8, characterized in that mapping the face images onto the frame of the three-dimensional face model to obtain the skin texture map covering the frame comprises:
obtaining a plurality of first regions in the frame, each first region being a closed region with first key points as vertices;
dividing the face area of the face image into a plurality of second regions with third key points as vertices, according to the third key points in the face image;
determining the first region corresponding to each second region according to the correspondence between the first key points and the third key points;
rendering the image content displayed in each second region, and mapping it as a local skin texture onto the corresponding first region in the three-dimensional face model, to obtain the skin texture map covering the frame.
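Claim 9's mapping of second regions onto first regions can be sketched as a per-triangle affine transform between corresponding key-point triangles; the triangulation and the 2D texture-atlas coordinates are assumptions for illustration.

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Affine map taking a second region (triangle of third key points in the image)
    onto its corresponding first region in an assumed 2D texture atlas."""
    src = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3 in homogeneous form
    dst = np.asarray(dst_tri, float)                                # 3x2 target vertices
    A, *_ = np.linalg.lstsq(src, dst, rcond=None)                   # solve src @ A = dst
    return A

def map_points(points, A):
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return pts @ A

image_tri = [[100, 120], [160, 118], [130, 170]]        # second region in the face image
atlas_tri = [[0.10, 0.12], [0.16, 0.12], [0.13, 0.17]]  # first region in texture space
A = triangle_affine(image_tri, atlas_tri)
print(np.allclose(map_points(image_tri, A), atlas_tri))  # True: vertices map exactly
```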
10. The face beautification processing method according to any one of claims 1-5, characterized in that after obtaining the three-dimensional face model, the method further comprises:
beautifying the skin texture map covering the surface of the three-dimensional face model.
11. The face beautification processing method according to claim 10, characterized in that beautifying the skin texture map covering the surface of the three-dimensional face model comprises:
detecting, according to the color information of the skin texture map of the three-dimensional face model, an abnormal range in which an abnormality exists in the skin texture map;
beautifying the abnormal range with a corresponding beautification strategy, according to the relative position relationship in three-dimensional space of the points within the abnormal range and the color information of the abnormal range.
12. The face beautification processing method according to claim 11, characterized in that beautifying the abnormal range with a corresponding beautification strategy, according to the relative position relationship in three-dimensional space of the points within the abnormal range and the color information of the abnormal range, comprises:
determining the abnormality type of the abnormal range according to the height difference between the center point and the edge points of the abnormal range;
determining the corresponding beautification strategy according to the abnormality type and the color information;
performing skin smoothing on the abnormal range using the filter range and filter strength indicated by the beautification strategy, according to the skin tone matched to the abnormal range.
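A hedged sketch of claim 12's strategy selection and skin smoothing: the abnormality labels, the height threshold, and the Gaussian filter strengths are invented for illustration, and SciPy's gaussian_filter stands in for whatever filter the beautification strategy actually specifies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def classify_abnormality(center_height, edge_heights, raised_thresh=0.5):
    """Derive the abnormality type from the height difference between the centre
    point and the edge points; labels and threshold (in mm) are assumptions."""
    delta = center_height - float(np.mean(edge_heights))
    return "raised_blemish" if delta > raised_thresh else "flat_discoloration"

def smooth_abnormal_range(texture, mask, abnormality_type):
    """Apply an assumed filter strength: stronger blur for raised blemishes,
    gentler blur for flat discolorations, only within the abnormal range."""
    sigma = 3.0 if abnormality_type == "raised_blemish" else 1.5
    blurred = gaussian_filter(texture.astype(float), sigma=sigma)
    out = texture.astype(float).copy()
    out[mask] = blurred[mask]
    return out

rng = np.random.default_rng(3)
tex = rng.uniform(0, 255, size=(64, 64))      # toy grayscale skin texture patch
mask = np.zeros_like(tex, dtype=bool)
mask[20:30, 20:30] = True                     # detected abnormal range
kind = classify_abnormality(center_height=1.2, edge_heights=[0.1, 0.2, 0.15])
print(kind, smooth_abnormal_range(tex, mask, kind).shape)
```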
13. The face beautification processing method according to any one of claims 1-5, characterized in that after acquiring the two-dimensional face image, the method further comprises:
recognizing the face image and determining that the face part of the face image does not include jewelry.
14. A face beautification processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire a two-dimensional face image and depth information corresponding to the face image;
a reconstruction module, configured to perform three-dimensional reconstruction according to the depth information and the face image to obtain a three-dimensional face model;
a determination module, configured to compare the relative position of each first key point in the three-dimensional face model with the relative positions of the second key points in a reference three-dimensional model to determine a position difference; wherein each second key point in the reference three-dimensional model has a correspondence with a first key point in the three-dimensional face model;
an adjustment module, configured to adjust the three-dimensional face model according to the position difference of each first key point of the three-dimensional face model.
15. An electronic device, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the face beautification processing method according to any one of claims 1-13.
16. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the face beautification processing method according to any one of claims 1-13.
CN201810549499.XA 2018-05-31 2018-05-31 Beautifying faces treating method and apparatus Pending CN108550185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810549499.XA CN108550185A (en) 2018-05-31 2018-05-31 Beautifying faces treating method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810549499.XA CN108550185A (en) 2018-05-31 2018-05-31 Beautifying faces treating method and apparatus

Publications (1)

Publication Number Publication Date
CN108550185A true CN108550185A (en) 2018-09-18

Family

ID=63511572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810549499.XA Pending CN108550185A (en) 2018-05-31 2018-05-31 Beautifying faces treating method and apparatus

Country Status (1)

Country Link
CN (1) CN108550185A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100328307A1 (en) * 2009-06-25 2010-12-30 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN102999942A (en) * 2012-12-13 2013-03-27 清华大学 Three-dimensional face reconstruction method
CN104899905A (en) * 2015-06-08 2015-09-09 深圳市诺比邻科技有限公司 Face image processing method and apparatus
US20170019597A1 (en) * 2015-07-16 2017-01-19 Canon Kabushiki Kaisha Light-emission control apparatus and method for the same
CN106503606A (en) * 2015-09-08 2017-03-15 宏达国际电子股份有限公司 Face image adjustment system and face image method of adjustment
CN107093171A (en) * 2016-02-18 2017-08-25 腾讯科技(深圳)有限公司 A kind of image processing method and device, system
CN107481317A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 The facial method of adjustment and its device of face 3D models
CN107480615A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 U.S. face processing method, device and mobile device
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272466A (en) * 2018-09-19 2019-01-25 维沃移动通信有限公司 A kind of tooth beautification method and device
US11341768B2 (en) 2018-09-28 2022-05-24 Beijing Sensetime Technology Development Co., Ltd. Face image processing method and apparatus, electronic device, and storage medium
TWI718631B (en) * 2018-09-28 2021-02-11 大陸商北京市商湯科技開發有限公司 Method, device and electronic apparatus for face image processing and storage medium thereof
US11734804B2 (en) 2018-09-28 2023-08-22 Beijing Sensetime Technology Development Co., Ltd. Face image processing method and apparatus, electronic device, and storage medium
US11741583B2 (en) 2018-09-28 2023-08-29 Beijing Sensetime Technology Development Co., Ltd. Face image processing method and apparatus, electronic device, and storage medium
WO2020062532A1 (en) * 2018-09-28 2020-04-02 北京市商汤科技开发有限公司 Face image processing method and apparatus, electronic device, and storage medium
CN109584146A (en) * 2018-10-15 2019-04-05 深圳市商汤科技有限公司 U.S. face treating method and apparatus, electronic equipment and computer storage medium
CN109325929A (en) * 2018-10-17 2019-02-12 联想(北京)有限公司 Image processing method and electronic equipment
CN109300188A (en) * 2018-10-23 2019-02-01 北京旷视科技有限公司 Threedimensional model processing method and processing device
CN111105343A (en) * 2018-10-26 2020-05-05 Oppo广东移动通信有限公司 Method and device for generating three-dimensional model of object
CN111105343B (en) * 2018-10-26 2023-06-09 Oppo广东移动通信有限公司 Method and device for generating three-dimensional model of object
CN111144169A (en) * 2018-11-02 2020-05-12 深圳比亚迪微电子有限公司 Face recognition method and device and electronic equipment
CN109657539B (en) * 2018-11-05 2022-01-25 达闼机器人有限公司 Face value evaluation method and device, readable storage medium and electronic equipment
CN109657539A (en) * 2018-11-05 2019-04-19 深圳前海达闼云端智能科技有限公司 Face value evaluation method and device, readable storage medium and electronic equipment
CN111797656A (en) * 2019-04-09 2020-10-20 Oppo广东移动通信有限公司 Face key point detection method and device, storage medium and electronic equipment
CN111797656B (en) * 2019-04-09 2023-08-22 Oppo广东移动通信有限公司 Face key point detection method and device, storage medium and electronic equipment
CN110288705A (en) * 2019-07-02 2019-09-27 北京字节跳动网络技术有限公司 The method and apparatus for generating threedimensional model
CN110288705B (en) * 2019-07-02 2023-08-04 北京字节跳动网络技术有限公司 Method and device for generating three-dimensional model
CN110502993A (en) * 2019-07-18 2019-11-26 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111370100A (en) * 2020-03-11 2020-07-03 深圳小佳科技有限公司 Face-lifting recommendation method and system based on cloud server
WO2022121577A1 (en) * 2020-12-10 2022-06-16 北京达佳互联信息技术有限公司 Image processing method and apparatus
CN112509005B (en) * 2020-12-10 2023-01-20 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN112509005A (en) * 2020-12-10 2021-03-16 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113240802A (en) * 2021-06-23 2021-08-10 中移(杭州)信息技术有限公司 Three-dimensional reconstruction whole-house virtual dimension installing method, device, equipment and storage medium
CN113657357B (en) * 2021-10-20 2022-02-25 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113657357A (en) * 2021-10-20 2021-11-16 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2023066120A1 (en) * 2021-10-20 2023-04-27 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN108550185A (en) Beautifying faces treating method and apparatus
CN108764180A (en) Face identification method, device, electronic equipment and readable storage medium storing program for executing
CN108447017A (en) Face virtual face-lifting method and device
CN108765273A (en) The virtual lift face method and apparatus that face is taken pictures
US11250241B2 (en) Face image processing methods and apparatuses, and electronic devices
CN108876709A (en) Method for beautifying faces, device, electronic equipment and readable storage medium storing program for executing
CN108876708A (en) Image processing method, device, electronic equipment and storage medium
CN108229279A (en) Face image processing process, device and electronic equipment
CN108765272A (en) Image processing method, device, electronic equipment and readable storage medium storing program for executing
CN107977940A (en) background blurring processing method, device and equipment
US20100189357A1 (en) Method and device for the virtual simulation of a sequence of video images
CN107705248A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN109147024A (en) Expression replacing options and device based on threedimensional model
CN108876886B (en) Image processing method and device and computer equipment
CN107563304A (en) Unlocking terminal equipment method and device, terminal device
CN102663741B (en) Method for carrying out visual stereo perception enhancement on color digit image and system thereof
CN109801380A (en) A kind of method, apparatus of virtual fitting, storage medium and computer equipment
CN109191584A (en) Threedimensional model processing method, device, electronic equipment and readable storage medium storing program for executing
CN110163832A (en) Face fusion method, apparatus and terminal
CN111066026B (en) Techniques for providing virtual light adjustment to image data
CN109242760A (en) Processing method, device and the electronic equipment of facial image
WO2020034698A1 (en) Three-dimensional model-based special effect processing method and device, and electronic apparatus
CN109102559A (en) Threedimensional model treating method and apparatus
CN109785228B (en) Image processing method, image processing apparatus, storage medium, and server
CN109191393A (en) U.S. face method based on threedimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180918