CN109584146A - Face beautification processing method and apparatus, electronic device, and computer storage medium - Google Patents

Face beautification processing method and apparatus, electronic device, and computer storage medium Download PDF

Info

Publication number
CN109584146A
CN109584146A (application CN201811199000.3A)
Authority
CN
China
Prior art keywords
face
dimensional
semantic
model
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811199000.3A
Other languages
Chinese (zh)
Inventor
戴立根
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201811199000.3A priority Critical patent/CN109584146A/en
Publication of CN109584146A publication Critical patent/CN109584146A/en
Pending legal-status Critical Current

Links

Classifications

    • G06T3/04
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Abstract

Embodiments of the present application disclose a face beautification processing method and apparatus, an electronic device, and a computer storage medium. The method includes: obtaining a three-dimensional face model; obtaining a beautification semantic feature, and adjusting the three-dimensional face model according to the beautification semantic feature, based on a mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model. Embodiments of the present application can achieve a better beautification effect.

Description

Face beautification processing method and apparatus, electronic device, and computer storage medium
Technical field
This application relates to computer vision technology, and in particular to a face beautification processing method and apparatus, an electronic device, and a computer storage medium.
Background
At present, beautification processing usually modifies a two-dimensional image, for example enlarging the eye region of the image to obtain a big-eye effect. However, the magnitude of change that such a method can apply to a two-dimensional image is limited, otherwise distortion results; moreover, when the face is at a large angle the required change becomes complex, so it is difficult to obtain a good result by beautifying a two-dimensional image.
Summary of the invention
Embodiments of the present application provide a face beautification processing technical solution.
According to one aspect of embodiments of the present application, a face beautification processing method is provided, including:
obtaining a three-dimensional face model;
obtaining a beautification semantic feature, and adjusting the three-dimensional face model according to the beautification semantic feature, based on a mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model.
Optionally, in the above method embodiment of the present application, before obtaining the three-dimensional face model, the method further includes: reconstructing the three-dimensional face model from a two-dimensional face image.
Optionally, in any of the above method embodiments of the present application, obtaining the beautification semantic feature and adjusting the three-dimensional face model according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, includes:
obtaining a variation of a beautification semantic feature parameter;
determining a variation of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and three-dimensional face models;
updating the three-dimensional face model according to the variation of the three-dimensional face model, to obtain the beautified three-dimensional face model.
Optionally, in any of the above method embodiments of the present application, the mapping relationship between beautification semantic features and three-dimensional face models includes: a mapping relationship between beautification semantic features and the face shape of a three-dimensional face model;
obtaining the three-dimensional face model includes:
obtaining a three-dimensional face model and the face shape parameter of the three-dimensional face model;
determining the variation of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and three-dimensional face models, includes:
determining a variation of the face shape parameter of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and the face shape of the three-dimensional face model;
updating the three-dimensional face model according to the variation of the three-dimensional face model to obtain the beautified three-dimensional face model includes:
updating the face shape parameter of the three-dimensional face model according to the variation of the face shape parameter of the three-dimensional face model;
obtaining the beautified three-dimensional face model according to the updated face shape parameter of the three-dimensional face model.
Optionally, in any of the above method embodiments of the present application, the method further includes:
pre-establishing the mapping relationship between beautification semantic features and the face shape of three-dimensional face models.
Optionally, in any of the above method embodiments of the present application, pre-establishing the mapping relationship between beautification semantic features and the face shape of three-dimensional face models includes:
quantifying at least one beautification semantic feature according to multiple three-dimensional face models to obtain beautification semantic feature parameters, where the face shape parameter values of the multiple three-dimensional face models differ and the expression parameter is a predetermined value;
determining the mapping relationship between beautification semantic features and the face shape of three-dimensional face models according to the beautification semantic feature parameters and the face shape parameters of the multiple three-dimensional face models.
Optionally, in any of the above method embodiments of the present application, quantifying at least one beautification semantic feature according to multiple three-dimensional face models includes:
determining, for each of the multiple three-dimensional face models, its weight for each of the at least one beautification semantic feature, and taking the weight as the beautification semantic feature parameter.
Optionally, in any of the above method embodiments of the present application, determining the weight of each of the multiple three-dimensional face models for each of the at least one beautification semantic feature includes:
determining an average three-dimensional face model of the multiple three-dimensional face models;
modifying the face shape parameter of the average three-dimensional face model according to each beautification semantic feature to obtain a beautified average three-dimensional face model;
determining the weight of each of the multiple three-dimensional face models for each beautification semantic feature according to the multiple three-dimensional face models, the average three-dimensional face model, and the beautified average three-dimensional face model.
Optionally, in any of the above method embodiments of the present application, before quantifying at least one beautification semantic feature according to multiple three-dimensional face models, the method further includes:
obtaining the multiple three-dimensional face models, their face shape parameters, and the at least one beautification semantic feature.
Optionally, in any of the above method embodiments of the present application, before obtaining the multiple three-dimensional face models, their face shape parameters, and the at least one beautification semantic feature, the method further includes:
obtaining the multiple three-dimensional face models and their face shape parameters based on a generic three-dimensional face model.
Optionally, in any of the above method embodiments of the present application, obtaining the multiple three-dimensional face models and their face shape parameters based on the generic three-dimensional face model includes:
establishing a generic three-dimensional face model, where the generic three-dimensional face model includes an expression parameter and a face shape parameter;
setting the expression parameter to a predetermined value and varying the face shape parameter value, to obtain the multiple three-dimensional face models and their face shape parameters.
Optionally, in any of the above method embodiments of the present application, after obtaining the beautification semantic feature, adjusting the three-dimensional face model according to the beautification semantic feature based on the mapping relationship between beautification semantic features and three-dimensional face models, and obtaining the beautified three-dimensional face model, the method further includes:
generating a beautified two-dimensional face image according to the beautified three-dimensional face model.
According to another aspect of embodiments of the present application, a face beautification processing apparatus is provided, including:
an obtaining unit, configured to obtain a three-dimensional face model;
a beautification unit, configured to obtain a beautification semantic feature, and adjust the three-dimensional face model according to the beautification semantic feature based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model.
Optionally, in the above apparatus embodiment of the present application, the apparatus further includes: a reconstruction unit, configured to reconstruct the three-dimensional face model from a two-dimensional face image.
Optionally, in any of the above apparatus embodiments of the present application, the beautification unit includes:
a first obtaining module, configured to obtain a variation of a beautification semantic feature parameter;
a conversion module, configured to determine a variation of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and three-dimensional face models;
an update module, configured to update the three-dimensional face model according to the variation of the three-dimensional face model, to obtain the beautified three-dimensional face model.
Optionally, in any of the above apparatus embodiments of the present application, the mapping relationship between beautification semantic features and three-dimensional face models includes: a mapping relationship between beautification semantic features and the face shape of a three-dimensional face model;
the obtaining unit is configured to obtain a three-dimensional face model and the face shape parameter of the three-dimensional face model;
the conversion module is configured to determine a variation of the face shape parameter of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and the face shape of the three-dimensional face model;
the update module is configured to update the face shape parameter of the three-dimensional face model according to the variation of the face shape parameter, and obtain the beautified three-dimensional face model according to the updated face shape parameter.
Optionally, in any of the above apparatus embodiments of the present application, the apparatus further includes: a pre-processing unit, configured to pre-establish the mapping relationship between beautification semantic features and the face shape of three-dimensional face models.
Optionally, in any of the above apparatus embodiments of the present application, the pre-processing unit includes:
a quantization module, configured to quantify at least one beautification semantic feature according to multiple three-dimensional face models to obtain beautification semantic feature parameters, where the face shape parameter values of the multiple three-dimensional face models differ and the expression parameter is a predetermined value;
a processing module, configured to determine the mapping relationship between beautification semantic features and the face shape of three-dimensional face models according to the beautification semantic feature parameters and the face shape parameters of the multiple three-dimensional face models.
Optionally, in any of the above apparatus embodiments of the present application, the quantization module is configured to determine, for each of the multiple three-dimensional face models, its weight for each of the at least one beautification semantic feature, and take the weight as the beautification semantic feature parameter.
Optionally, in any of the above apparatus embodiments of the present application, the quantization module is configured to: determine an average three-dimensional face model of the multiple three-dimensional face models; modify the face shape parameter of the average three-dimensional face model according to each beautification semantic feature to obtain a beautified average three-dimensional face model; and determine the weight of each of the multiple three-dimensional face models for each beautification semantic feature according to the multiple three-dimensional face models, the average three-dimensional face model, and the beautified average three-dimensional face model.
Optionally, in any of the above apparatus embodiments of the present application, the pre-processing unit further includes:
a second obtaining module, configured to obtain the multiple three-dimensional face models, their face shape parameters, and the at least one beautification semantic feature.
Optionally, in any of the above apparatus embodiments of the present application, the pre-processing unit further includes:
a generation module, configured to obtain the multiple three-dimensional face models and their face shape parameters based on a generic three-dimensional face model.
Optionally, in any of the above apparatus embodiments of the present application, the generation module is configured to establish a generic three-dimensional face model, where the generic three-dimensional face model includes an expression parameter and a face shape parameter; and to set the expression parameter to a predetermined value and vary the face shape parameter value, to obtain the multiple three-dimensional face models and their face shape parameters.
Optionally, in any of the above apparatus embodiments of the present application, the apparatus further includes:
a generation unit, configured to generate a beautified two-dimensional face image according to the beautified three-dimensional face model.
According to yet another aspect of embodiments of the present application, an electronic device is provided, including the apparatus described in any of the above embodiments.
According to yet another aspect of embodiments of the present application, an electronic device is provided, including:
a memory, configured to store executable instructions; and
a processor, configured to execute the executable instructions to complete the method described in any of the above embodiments.
According to yet another aspect of embodiments of the present application, a computer program is provided, including computer-readable code, where when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the method described in any of the above embodiments.
According to yet another aspect of embodiments of the present application, a computer program product is provided, configured to store computer-readable instructions, where the instructions, when executed, cause a computer to execute the method described in any of the above embodiments.
In one optional embodiment, the computer program product is specifically a computer storage medium; in another optional embodiment, the computer program product is specifically a software product, such as an SDK.
Based on the face beautification processing method and apparatus, electronic device, and computer storage medium provided by the above embodiments of the present application, a three-dimensional face model is obtained (the three-dimensional face model is reconstructed from a two-dimensional face image), a beautification semantic feature is obtained, and the three-dimensional face model is adjusted according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model. Because beautification is performed on the three-dimensional face model, the face shape change is limited neither by the magnitude of the change nor by the angle of the model, so a better beautification effect can be obtained.
The technical solution of the present application is described in further detail below with reference to the accompanying drawings and embodiments.
Brief description of the drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present application and, together with the description, serve to explain the principles of the application.
The application can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of a face beautification processing method according to some embodiments of the present application;
Fig. 2 is a flowchart of establishing the mapping relationship between beautification semantic features and the face shape of three-dimensional face models according to some embodiments of the present application;
Fig. 3 is a flowchart of quantifying at least one beautification semantic feature according to multiple three-dimensional face models to obtain beautification semantic feature parameters, according to some embodiments of the present application;
Fig. 4 is a schematic structural diagram of a face beautification processing apparatus according to some embodiments of the present application;
Fig. 5 is a schematic structural diagram of a beautification unit according to some embodiments of the present application;
Fig. 6 is a schematic structural diagram of a face beautification processing apparatus according to other embodiments of the present application;
Fig. 7 is a schematic structural diagram of a pre-processing unit according to some embodiments of the present application;
Fig. 8 is a schematic structural diagram of a pre-processing unit according to other embodiments of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided by some embodiments of the present application.
Detailed description
Various exemplary embodiments of the present application are now described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the application.
Meanwhile, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the application, its application, or its uses.
Techniques, methods, and devices known to a person of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be discussed further in subsequent drawings.
Embodiments of the present application may be applied to a computer system/server, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations suitable for use with a computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems, and the like.
The computer system/server may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in distributed cloud computing environments where tasks are performed by remote processing devices linked through a communications network. In a distributed cloud computing environment, program modules may be located in local or remote computing system storage media including storage devices.
Fig. 1 is a flowchart of a face beautification processing method according to some embodiments of the present application. The method may be executed by a server or a terminal device, where the terminal device is, for example, a mobile phone, a computer, or an in-vehicle device. As shown in Fig. 1, the method includes:
102: obtain a three-dimensional face model.
Optionally, the three-dimensional face model may be reconstructed from a two-dimensional face image. In one optional example, a three-dimensional face model reconstructed from a two-dimensional face image may be stored in advance, and the pre-stored three-dimensional face model is obtained. In another optional example, the three-dimensional face model may be reconstructed from a two-dimensional face image before the obtaining step, and the reconstructed three-dimensional face model is then obtained. This embodiment does not limit how the three-dimensional face model is obtained.
Optionally, key points of the two-dimensional face image may be preset, and the three-dimensional face model is generated according to the correspondence between the key points of the two-dimensional face image and the key points of a prior three-dimensional face model. For example, the key points may include face outer-contour key points, eye key points, eyebrow key points, lip key points, nose key points, and so on; this embodiment does not limit the type and number of key points.
Optionally, the prior three-dimensional face model may be a principal component analysis (PCA) model, or another three-dimensional face model other than a PCA model; embodiments of the present application do not limit the type of the prior three-dimensional face model.
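The patent does not specify how the prior model is fitted to the 2D key points; the sketch below is one common least-squares approach under explicit assumptions (a scaled-orthographic camera with a known rotation and scale), and all array names such as `mean_shape`, `basis`, and `landmark_idx` are illustrative rather than taken from the patent.

```python
import numpy as np

def fit_shape_to_landmarks(landmarks_2d, mean_shape, basis, landmark_idx,
                           rotation, scale, reg=1e-3):
    """Least-squares fit of PCA shape coefficients to detected 2D landmarks.

    landmarks_2d : (k, 2) detected image key points
    mean_shape   : (3n,)  mean 3D face, vertices flattened as x0, y0, z0, x1, ...
    basis        : (3n, m) PCA shape basis of the prior face model
    landmark_idx : (k,)   mesh vertex index of each landmark
    rotation     : (3, 3) assumed head rotation
    scale        : float  assumed orthographic scale
    """
    P = scale * rotation[:2, :]                          # 2x3 scaled-orthographic projection
    rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in landmark_idx])
    mean_lmk = mean_shape[rows].reshape(-1, 3)           # (k, 3) mean landmark positions
    basis_lmk = basis[rows].reshape(-1, 3, basis.shape[1])  # (k, 3, m)

    # Linear system: P @ (mean + basis @ w) ~= landmarks_2d, stacked over landmarks.
    A = np.einsum('pc,kcm->kpm', P, basis_lmk).reshape(-1, basis.shape[1])
    b = (landmarks_2d - mean_lmk @ P.T).reshape(-1)
    # Ridge regularization keeps the shape coefficients in a plausible range.
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
```

In practice the rotation and scale would themselves be estimated (for example by alternating with the shape fit), but the linear step above is the core of landmark-based fitting.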
104: obtain a beautification semantic feature, and adjust the three-dimensional face model according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model.
In this embodiment, the beautification semantic feature may include, but is not limited to, at least one of a thin face, a plump face, big eyes, small eyes, a high nose, a thin nose, a small mouth, a big mouth, thin eyebrows, thick eyebrows, and so on. Multiple beautification semantic features may be preset, and at least one of the preset beautification semantic features is selected to obtain the beautification semantic feature; for example, the obtained beautification semantic features may be thin face, big eyes, and high nose.
Optionally, a variation of a beautification semantic feature parameter may be obtained; the variation of the three-dimensional face model is determined according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and three-dimensional face models; and the three-dimensional face model is then updated according to its variation, to obtain the beautified three-dimensional face model.
In one optional example, the mapping relationship between beautification semantic features and three-dimensional face models is a mapping relationship between beautification semantic features and the face shape of a three-dimensional face model. The three-dimensional face model and its face shape parameter are obtained; the variation of the face shape parameter of the three-dimensional face model is determined according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and the face shape of the three-dimensional face model; the face shape parameter is updated according to its variation; and the beautified three-dimensional face model is obtained according to the updated face shape parameter.
Based on the face beautification processing method provided by the above embodiments of the present application, a three-dimensional face model is obtained, a beautification semantic feature is obtained, and the three-dimensional face model is adjusted according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model. Because beautification is performed on the three-dimensional face model, the face shape change is limited neither by the magnitude of the change nor by the angle of the model, so a better beautification effect can be obtained.
In some embodiments, the face shape parameter of the three-dimensional face model M is a vector w_id, the variation of the beautification semantic feature parameters is a vector Δf = [Δf_0, ..., Δf_l]^T with Δf_j ≥ 0, j = 1, ..., l, and the mapping relationship between beautification semantic features and the face shape of the three-dimensional face model is a matrix H. The variation of the face shape parameter of the three-dimensional face model is then HΔf, and the updated face shape parameter is w_id = w_id + HΔf. Substituting the updated face shape parameter w_id into the formula of the three-dimensional face model, M = C_r ×_2 w_id ×_3 w_exp, yields the beautified three-dimensional face model, where C_r is the core tensor of the three-dimensional face model and w_exp is its expression parameter.
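A minimal sketch of this update step, assuming the mapping matrix H and the bilinear core tensor C_r are already available; the array layouts and names below are illustrative assumptions, not the patent's own code.

```python
import numpy as np

def beautify_shape(w_id, delta_f, H):
    """Apply the semantic beautification variation delta_f to the face shape parameter.

    w_id    : (m,)   current face shape parameter of the 3D face model
    delta_f : (l,)   non-negative variations of the beautification semantic features
    H       : (m, l) mapping from semantic feature variations to shape parameter variations
    """
    return w_id + H @ delta_f            # w_id' = w_id + H * delta_f

def rebuild_face(core, w_id, w_exp):
    """Rebuild the 3D face by contracting the bilinear core tensor with the shape and
    expression parameters (an assumed bilinear-face layout)."""
    # core: (3v, m, e); w_id: (m,); w_exp: (e,)  ->  vertices flattened to (3v,)
    return np.einsum('vme,m,e->v', core, w_id, w_exp)

# Usage: e.g. strengthen "big eyes" and "high nose" by chosen amounts.
# delta_f = np.array([0.0, 0.3, 0.2]); w_id = beautify_shape(w_id, delta_f, H)
# vertices = rebuild_face(core, w_id, w_exp).reshape(-1, 3)
```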
In some embodiments, after operation 104 (obtaining the beautification semantic feature, adjusting the three-dimensional face model according to it based on the mapping relationship between beautification semantic features and three-dimensional face models, and obtaining the beautified three-dimensional face model), the method further includes: generating a beautified two-dimensional face image according to the beautified three-dimensional face model, which extends the range of application of the face beautification processing method.
In some embodiments, the mapping relationship between beautification semantic features and the face shape of three-dimensional face models may be pre-established, and beautification processing is then performed using the established mapping relationship. The process of establishing the mapping relationship between beautification semantic features and the face shape of three-dimensional face models is described in detail below with reference to Fig. 2.
As shown in Fig. 2, the method includes:
202: quantify at least one beautification semantic feature according to multiple three-dimensional face models to obtain beautification semantic feature parameters.
In this embodiment, the multiple three-dimensional face models may be three-dimensional face models whose face shape parameter values differ and whose expression parameter is a predetermined value, for example the expression parameter value of an expressionless three-dimensional face model. Quantifying the beautification semantic features with expressionless three-dimensional face models avoids the influence of expression on the beautification semantic features, so that the beautification semantic features are related only to the face shape of the three-dimensional face model.
Optionally, the weight of each of the multiple three-dimensional face models for each of the at least one beautification semantic feature may be determined, and the determined weight is taken as the beautification semantic feature parameter, thereby quantifying the beautification semantic feature; however, this embodiment is not limited thereto.
204: determine the mapping relationship between beautification semantic features and the face shape of three-dimensional face models according to the beautification semantic feature parameters and the face shape parameters of the multiple three-dimensional face models.
Optionally, an equation reflecting the relationship between the beautification semantic feature parameters and the face shape parameters of the multiple three-dimensional face models may be established, and the mapping relationship between beautification semantic features and the face shape of three-dimensional face models is obtained by solving this equation.
In this embodiment, by pre-establishing the mapping relationship between beautification semantic features and the face shape of three-dimensional face models, when beautification processing is performed on a three-dimensional face model, the pre-established mapping relationship can be used directly according to the beautification semantic feature to obtain the face shape of the beautified three-dimensional face model, which saves a large amount of computing resources and shortens the beautification processing time.
Fig. 3 is a flowchart of quantifying at least one beautification semantic feature according to multiple three-dimensional face models to obtain beautification semantic feature parameters, according to some embodiments of the present application.
As shown in Fig. 3, the method includes:
302: determine the average three-dimensional face model of the multiple three-dimensional face models.
Optionally, the average three-dimensional face model may be determined from multiple three-dimensional face models whose face shape parameter values differ and whose expression parameter is a predetermined value. For example, the face shape parameter of the average three-dimensional face model may be obtained by computing the arithmetic mean of the face shape parameters of the multiple three-dimensional face models, thereby determining the face shape of the average three-dimensional face model.
304: modify the face shape parameter of the average three-dimensional face model according to each beautification semantic feature to obtain a beautified average three-dimensional face model.
Optionally, the face shape parameter of the average three-dimensional face model may be modified manually or automatically according to each beautification semantic feature to obtain the beautified average three-dimensional face model.
306: determine the weight of each of the multiple three-dimensional face models for each beautification semantic feature according to the multiple three-dimensional face models, the average three-dimensional face model, and the beautified average three-dimensional face model.
Optionally, the weight of each of the multiple three-dimensional face models for each of the at least one beautification semantic feature may be determined according to the multiple three-dimensional face models (whose face shape parameter values differ and whose expression parameter is a predetermined value), their average three-dimensional face model, and the beautified average three-dimensional face model, and the determined weight is taken as the beautification semantic feature parameter.
This embodiment redefines how beautification semantic features are measured, so that beautification semantic features can be quantified efficiently and accurately.
In some embodiments, before operation 202 (quantifying at least one beautification semantic feature according to multiple three-dimensional face models), the method further includes: obtaining the multiple three-dimensional face models, their face shape parameters, and the at least one beautification semantic feature.
Optionally, the multiple three-dimensional face models and their face shape parameters may be obtained based on a generic three-dimensional face model, which simplifies obtaining the multiple three-dimensional face models and makes it convenient to obtain their face shape parameters. In one optional example, a generic three-dimensional face model may be established, where the generic three-dimensional face model includes an expression parameter and a face shape parameter; by setting the expression parameter to a predetermined value and varying the face shape parameter value, the multiple three-dimensional face models and their face shape parameters are obtained.
In some embodiments, a PCA model may be used as the generic three-dimensional face model. A bilinear PCA model, X = C_r ×_2 w_id ×_3 w_exp, is first established to describe an arbitrary three-dimensional face model. Since a particular pair of values w_id and w_exp determines a specific three-dimensional face model X, setting w_exp to the expression parameter value of an expressionless face and setting w_id to different face shape parameter values yields multiple three-dimensional face models M_i. Each M_i together with its w_id forms one group of data, i = 1, ..., n, so n groups of data are obtained, for example n = 150.
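A minimal sketch of generating the n sample models from such a bilinear model, assuming the core tensor and a neutral expression vector are given; the random sampling of shape parameters and all names are illustrative assumptions.

```python
import numpy as np

def sample_face_models(core, w_exp_neutral, n=150, scale=1.0, seed=0):
    """Generate n 3D face models with a fixed (neutral) expression and varying
    face shape parameters, returning both the models and their shape parameters.

    core          : (3v, m, e) bilinear core tensor (assumed layout)
    w_exp_neutral : (e,)       expression parameter of an expressionless face
    """
    rng = np.random.default_rng(seed)
    # Expression is fixed to the predetermined value; only the shape coefficients vary.
    w_ids = rng.normal(0.0, scale, size=(n, core.shape[1]))          # (n, m)
    faces = np.einsum('vme,nm,e->nv', core, w_ids, w_exp_neutral)    # (n, 3v)
    return faces, w_ids
```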
Then l beautification semantic features are determined, and the l beautification semantic features are quantified according to the following operations (1) to (3):
(1) based on the n groups of three-dimensional face model data, compute the average three-dimensional face model M̄ = (1/n) Σ_{i=1}^{n} M_i;
(2) modify the average three-dimensional face model M̄ with each of the l beautification semantic features to obtain the beautified average three-dimensional face models M̄_j, where j = 1, ..., l;
(3) compute the weight f_i^j of each three-dimensional face model M_i for each beautification semantic feature j, from M_i, M̄, and M̄_j.
Here the weight f_i^j may be a positive value or a negative value. When the beautification semantic feature changes in the increasing direction, the corresponding weight f_i^j is positive; for example, when the beautification semantic feature is big eyes, f_i^j is positive. When the beautification semantic feature changes in the decreasing direction, the corresponding weight f_i^j is negative; for example, when the beautification semantic feature is small eyes, f_i^j is negative.
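The exact weight formula is not reproduced in the text above, so the sketch below uses one plausible choice as an explicit assumption: project each model's offset from the average onto the direction toward the beautified average, so that models deviating toward the beautified average get positive weights and models deviating away from it get negative ones.

```python
import numpy as np

def semantic_feature_weights(faces, face_avg, face_beautified_avg):
    """Weight f_i^j of each face model i for one beautification semantic feature j.

    faces               : (n, 3v) the n sample 3D face models (flattened vertices)
    face_avg            : (3v,)   the average 3D face model
    face_beautified_avg : (3v,)   the average model after applying feature j

    Assumption (not from the patent): f_i^j is the scalar projection of
    (M_i - M_avg) onto the unit direction (M_avg_j - M_avg), so its sign encodes
    whether model i deviates toward or away from the beautified average.
    """
    direction = face_beautified_avg - face_avg
    direction = direction / np.linalg.norm(direction)
    return (faces - face_avg) @ direction                # (n,) weights, positive or negative
```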
Finally, the transfer matrix H from beautification semantic features to the face shape of the three-dimensional face model is computed. Given the weight f_i^j of each of the n three-dimensional face models for each of the l beautification semantic features, a matrix F of size (l+1)×n is formed,
where the value at row j, column i of F is f_i^j, and every value in row l+1 of F is 1.
A matrix P of size m×n is constructed, where the i-th column of P is the face shape parameter w_id of the i-th three-dimensional face model. The transfer matrix H then satisfies HF = P and is computed as H = PF^+, where F^+ is the pseudo-inverse of F.
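A minimal sketch of assembling F and P and solving for the transfer matrix H with a pseudo-inverse, under the shapes described above; all names are illustrative.

```python
import numpy as np

def compute_transfer_matrix(weights, w_ids):
    """Solve H F = P for the transfer matrix H.

    weights : (l, n) weights f_i^j, one row per beautification semantic feature
    w_ids   : (n, m) face shape parameter of each of the n sample models
    Returns H of shape (m, l + 1), mapping semantic feature variations
    (plus a constant row of ones) to face shape parameter variations.
    """
    l, n = weights.shape
    F = np.vstack([weights, np.ones((1, n))])   # (l + 1, n), last row is all ones
    P = w_ids.T                                 # (m, n), i-th column is w_id of model i
    return P @ np.linalg.pinv(F)                # H = P F^+  (Moore-Penrose pseudo-inverse)
```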
Fig. 4 is a schematic structural diagram of a face beautification processing apparatus according to some embodiments of the present application. The apparatus may be provided in a server or a terminal device, where the terminal device is, for example, a mobile phone, a computer, or an in-vehicle device. As shown in Fig. 4, the apparatus includes: an obtaining unit 410 and a beautification unit 420.
The obtaining unit 410 is configured to obtain a three-dimensional face model.
Optionally, the three-dimensional face model may be reconstructed from a two-dimensional face image. In one optional example, a three-dimensional face model reconstructed from a two-dimensional face image may be stored in advance, and the obtaining unit 410 obtains the pre-stored three-dimensional face model. In another optional example, a reconstruction unit may reconstruct the three-dimensional face model from a two-dimensional face image before the obtaining step, and the obtaining unit 410 obtains the reconstructed three-dimensional face model. This embodiment does not limit how the obtaining unit 410 obtains the three-dimensional face model.
Optionally, the obtaining unit 410 may preset key points of the two-dimensional face image and generate the three-dimensional face model according to the correspondence between the key points of the two-dimensional face image and the key points of a prior three-dimensional face model. For example, the key points may include face outer-contour key points, eye key points, eyebrow key points, lip key points, nose key points, and so on; this embodiment does not limit the type and number of key points.
Optionally, the prior three-dimensional face model may be a principal component analysis (PCA) model, or another three-dimensional face model other than a PCA model; embodiments of the present application do not limit the type of the prior three-dimensional face model.
The beautification unit 420 is configured to obtain a beautification semantic feature, and adjust the three-dimensional face model according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model.
In this embodiment, the beautification semantic feature may include, but is not limited to, at least one of a thin face, a plump face, big eyes, small eyes, a high nose, a thin nose, a small mouth, a big mouth, thin eyebrows, thick eyebrows, and so on. Multiple beautification semantic features may be preset, and at least one of them is selected to obtain the beautification semantic feature; for example, the obtained beautification semantic features may be thin face, big eyes, and high nose.
Optionally, as shown in Fig. 5, the beautification unit may include a first obtaining module 510, a conversion module 520, and an update module 530. The first obtaining module 510 obtains a variation of a beautification semantic feature parameter; the conversion module 520 determines a variation of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and three-dimensional face models; and the update module 530 updates the three-dimensional face model according to its variation, to obtain the beautified three-dimensional face model.
In one optional example, the mapping relationship between beautification semantic features and three-dimensional face models is a mapping relationship between beautification semantic features and the face shape of a three-dimensional face model. The obtaining unit 410 obtains the three-dimensional face model and its face shape parameter; the conversion module 520 determines the variation of the face shape parameter of the three-dimensional face model according to the variation of the beautification semantic feature parameter, based on the mapping relationship between beautification semantic features and the face shape of the three-dimensional face model; and the update module 530 updates the face shape parameter according to its variation and obtains the beautified three-dimensional face model according to the updated face shape parameter.
Based on the face beautification processing apparatus provided by the above embodiments of the present application, a three-dimensional face model is obtained (reconstructed from a two-dimensional face image), a beautification semantic feature is obtained, and the three-dimensional face model is adjusted according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model. Because beautification is performed on the three-dimensional face model, the face shape change is limited neither by the magnitude of the change nor by the angle of the model, so a better beautification effect can be obtained.
In some embodiments, the face beautification processing apparatus further includes a generation unit, configured to generate a beautified two-dimensional face image according to the beautified three-dimensional face model, which extends the range of application of the face beautification processing method.
Fig. 6 is a schematic structural diagram of a face beautification processing apparatus according to other embodiments of the present application. As shown in Fig. 6, in addition to the obtaining unit 410 and the beautification unit 420, the apparatus further includes a pre-processing unit 430, configured to pre-establish the mapping relationship between beautification semantic features and the face shape of three-dimensional face models. For example, the pre-processing unit 430 may pre-establish the mapping relationship between beautification semantic features and the face shape of three-dimensional face models, and the obtaining unit 410 and the beautification unit 420 then perform beautification processing using this mapping relationship. The structure of the pre-processing unit 430 is described in detail below with reference to Fig. 7.
As shown in Fig. 7, the pre-processing unit 430 includes: a quantization module 710 and a processing module 720.
The quantization module 710 is configured to quantify at least one beautification semantic feature according to multiple three-dimensional face models to obtain beautification semantic feature parameters.
In this embodiment, the multiple three-dimensional face models may be three-dimensional face models whose face shape parameter values differ and whose expression parameter is a predetermined value, for example the expression parameter value of an expressionless three-dimensional face model. Quantifying the beautification semantic features with expressionless three-dimensional face models avoids the influence of expression on the beautification semantic features, so that the beautification semantic features are related only to the face shape of the three-dimensional face model.
Optionally, the quantization module 710 may determine the weight of each of the multiple three-dimensional face models for each of the at least one beautification semantic feature, and take the determined weight as the beautification semantic feature parameter, thereby quantifying the beautification semantic feature; however, this embodiment is not limited thereto.
The processing module 720 is configured to determine the mapping relationship between beautification semantic features and the face shape of three-dimensional face models according to the beautification semantic feature parameters and the face shape parameters of the multiple three-dimensional face models.
Optionally, the processing module 720 may establish an equation reflecting the relationship between the beautification semantic feature parameters and the face shape parameters of the multiple three-dimensional face models, and obtain the mapping relationship between beautification semantic features and the face shape of three-dimensional face models by solving this equation.
In this embodiment, by pre-establishing the mapping relationship between beautification semantic features and the face shape of three-dimensional face models, when beautification processing is performed on a three-dimensional face model, the pre-established mapping relationship can be used directly according to the beautification semantic feature to obtain the face shape of the beautified three-dimensional face model, which saves a large amount of computing resources and shortens the beautification processing time.
In some embodiments, the quantization module 710 may determine the average three-dimensional face model of the multiple three-dimensional face models, modify the face shape parameter of the average three-dimensional face model according to each beautification semantic feature to obtain a beautified average three-dimensional face model, and determine the weight of each of the multiple three-dimensional face models for each beautification semantic feature according to the multiple three-dimensional face models, the average three-dimensional face model, and the beautified average three-dimensional face model. This embodiment redefines how beautification semantic features are measured, so that they can be quantified efficiently and accurately.
In some embodiments, as shown in Fig. 8, in addition to the quantization module 710 and the processing module 720, the pre-processing unit 430 further includes a second obtaining module 730, configured to obtain the multiple three-dimensional face models, their face shape parameters, and the at least one beautification semantic feature. Optionally, as shown in Fig. 8, the pre-processing unit may further include a generation module 740, configured to obtain the multiple three-dimensional face models and their face shape parameters based on a generic three-dimensional face model, which simplifies obtaining the multiple three-dimensional face models and makes it convenient to obtain their face shape parameters. In one optional example, the generation module 740 may establish a generic three-dimensional face model, where the generic three-dimensional face model includes an expression parameter and a face shape parameter; by setting the expression parameter to a predetermined value and varying the face shape parameter value, the multiple three-dimensional face models and their face shape parameters are obtained.
Embodiments of the present application also provide an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring to Fig. 9, which shows a schematic structural diagram of an electronic device 900 suitable for implementing a terminal device or a server of embodiments of the present application: as shown in Fig. 9, the electronic device 900 includes one or more processors and a communication unit, among other components. The one or more processors are, for example, one or more central processing units (CPU) 901 and/or one or more acceleration units 913; the acceleration unit 913 may include, but is not limited to, a GPU, an FPGA, or another type of dedicated processor. The processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 902 or executable instructions loaded from a storage section 908 into a random access memory (RAM) 903. The communication unit 912 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card. The processor may communicate with the read-only memory 902 and/or the random access memory 903 to execute the executable instructions, is connected to the communication unit 912 through a bus 904, and communicates with other target devices through the communication unit 912, thereby completing operations corresponding to any of the methods provided by embodiments of the present application, for example: obtaining a three-dimensional face model; obtaining a beautification semantic feature, and adjusting the three-dimensional face model according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model.
In addition, the RAM 903 may also store various programs and data required for the operation of the apparatus. The CPU 901, the ROM 902, and the RAM 903 are connected to one another through the bus 904. Where the RAM 903 exists, the ROM 902 is an optional module. The RAM 903 stores executable instructions, or executable instructions are written into the ROM 902 at runtime, and the executable instructions cause the central processing unit 901 to execute the operations corresponding to the above method. An input/output (I/O) interface 905 is also connected to the bus 904. The communication unit 912 may be provided integrally, or may be provided with multiple sub-modules (for example, multiple IB network cards) linked to the bus.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read therefrom can be installed into the storage section 908 as needed.
It should be noted that the architecture shown in Fig. 9 is only one optional implementation. In practice, the number and types of the components in Fig. 9 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be provided separately or integrally; for example, the acceleration unit 913 and the CPU 901 may be provided separately, or the acceleration unit 913 may be integrated on the CPU 901, and the communication unit 912 may be provided separately or integrated on the CPU 901 or the acceleration unit 913, and so on. These alternative embodiments all fall within the protection scope disclosed by the present application.
In particular, according to embodiments of the present application, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present application includes a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowchart, and the program code may include instructions corresponding to the method steps provided by embodiments of the present application, for example: obtaining a three-dimensional face model; obtaining a beautification semantic feature, and adjusting the three-dimensional face model according to the beautification semantic feature, based on the mapping relationship between beautification semantic features and three-dimensional face models, to obtain a beautified three-dimensional face model. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909 and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the functions defined in the method of the present application are executed.
In one or more optional embodiments, the embodiment of the present application also provides a computer program product for storing computer-readable instructions which, when executed, cause a computer to execute the face beautification processing method in any of the above possible implementations.
The computer program product may be implemented by hardware, software, or a combination thereof. In one optional example, the computer program product is embodied as a computer storage medium; in another optional example, the computer program product is embodied as a software product, such as a software development kit (SDK).
In one or more optional embodiments, the embodiment of the present application also provides another face beautification processing method and a corresponding apparatus, system, electronic device, computer storage medium, computer program and computer program product, wherein the method includes: a first device sends a beautification instruction to a second device, the instruction causing the second device to execute the face beautification processing method in any of the above possible embodiments; and the first device receives a beautification result sent by the second device.
In some embodiments, the beautification instruction may specifically be a call instruction: the first device may instruct the second device to perform beautification by means of a call, and accordingly, in response to receiving the call instruction, the second device may execute the steps and/or processes in any embodiment of the above face beautification processing method.
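Purely as an illustration of this first-device/second-device pattern, the following hypothetical in-process sketch models the "beautification instruction" as a plain function call whose result is returned to the caller; the class and method names are assumptions, not part of the disclosed method.

    import numpy as np

    class SecondDevice:
        def __init__(self, beautify):
            self._beautify = beautify                 # any callable that beautifies a model

        def handle_call(self, face_model):
            # In response to the received call instruction, execute the
            # beautification and send the result back.
            return self._beautify(face_model)

    class FirstDevice:
        def request_beautification(self, peer, face_model):
            return peer.handle_call(face_model)       # receive the beautification result

    # Example with a trivial placeholder beautifier
    second = SecondDevice(lambda m: m * 1.05)
    print(FirstDevice().request_beautification(second, np.ones(4)))
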
It should be understood that terms such as "first" and "second" in the embodiments of the present application are used only for distinction and should not be construed as limiting the embodiments of the present application.
It should also be understood that, in the present application, "a plurality of" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be understood that any component, data or structure mentioned in the present application may generally be understood as one or more, unless explicitly defined otherwise or unless the context indicates otherwise.
It should also be understood that the description of the embodiments of the present application emphasizes the differences between the embodiments; for the same or similar parts, reference may be made to one another, and for brevity they are not repeated.
The methods and apparatuses of the present application may be implemented in many ways, for example, by software, hardware, firmware, or any combination of software, hardware and firmware. The above order of the steps of the method is only for illustration, and the steps of the method of the present application are not limited to the order described above, unless otherwise specified. In addition, in some embodiments, the present application may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present application. Thus, the present application also covers a recording medium storing a program for executing the methods according to the present application.
The description of the present application is given for the purposes of illustration and description, and is not intended to be exhaustive or to limit the present application to the disclosed form. Many modifications and variations are obvious to those of ordinary skill in the art. The embodiments were chosen and described in order to better explain the principles and practical applications of the present application, and to enable those skilled in the art to understand the present application and thus design various embodiments with various modifications suited to particular uses.

Claims (10)

1. A face beautification processing method, characterized by comprising:
obtaining a three-dimensional face model; and
obtaining a semantic beautification feature, and adjusting the three-dimensional face model according to the semantic beautification feature based on a mapping relationship between semantic beautification features and three-dimensional face models, to obtain a beautified three-dimensional face model.
2. The method according to claim 1, characterized in that obtaining the semantic beautification feature and adjusting the three-dimensional face model according to the semantic beautification feature based on the mapping relationship between semantic beautification features and three-dimensional face models comprises:
obtaining a variation of a semantic beautification feature parameter;
determining a variation of the three-dimensional face model according to the variation of the semantic beautification feature parameter, based on the mapping relationship between semantic beautification features and three-dimensional face models; and
updating the three-dimensional face model according to the variation of the three-dimensional face model, to obtain the beautified three-dimensional face model.
3. The method according to claim 2, characterized in that the mapping relationship between semantic beautification features and three-dimensional face models comprises: a mapping relationship between semantic beautification features and the face shape of a three-dimensional face model;
obtaining the three-dimensional face model comprises:
obtaining the three-dimensional face model and face-shape parameters of the three-dimensional face model;
determining the variation of the three-dimensional face model according to the variation of the semantic beautification feature parameter, based on the mapping relationship between semantic beautification features and three-dimensional face models, comprises:
determining a variation of the face-shape parameters of the three-dimensional face model according to the variation of the semantic beautification feature parameter, based on the mapping relationship between semantic beautification features and the face shape of the three-dimensional face model; and
updating the three-dimensional face model according to the variation of the three-dimensional face model, to obtain the beautified three-dimensional face model, comprises:
updating the face-shape parameters of the three-dimensional face model according to the variation of the face-shape parameters of the three-dimensional face model; and
obtaining the beautified three-dimensional face model according to the updated face-shape parameters of the three-dimensional face model.
4. The method according to claim 3, characterized by further comprising:
pre-establishing the mapping relationship between semantic beautification features and the face shape of the three-dimensional face model.
5. The method according to claim 4, characterized in that pre-establishing the mapping relationship between semantic beautification features and the face shape of the three-dimensional face model comprises:
quantifying at least one semantic beautification feature according to a plurality of three-dimensional face models to obtain semantic beautification feature parameters, wherein the face-shape parameter values of the plurality of three-dimensional face models are not identical and the expression parameters thereof are predetermined values; and
determining the mapping relationship between semantic beautification features and the face shape of the three-dimensional face model according to the semantic beautification feature parameters and the face-shape parameters of the plurality of three-dimensional face models.
6. The method according to claim 5, characterized in that quantifying the at least one semantic beautification feature according to the plurality of three-dimensional face models comprises:
determining, for each three-dimensional face model among the plurality of three-dimensional face models, a weight of that three-dimensional face model for each of the at least one semantic beautification feature, and using the weights as the semantic beautification feature parameters.
7. A face beautification processing apparatus, characterized by comprising:
an acquiring unit, configured to obtain a three-dimensional face model; and
a beautification unit, configured to obtain a semantic beautification feature, and to adjust the three-dimensional face model according to the semantic beautification feature based on a mapping relationship between semantic beautification features and three-dimensional face models, to obtain a beautified three-dimensional face model.
8. An electronic device, characterized by comprising the apparatus according to claim 7.
9. An electronic device, characterized by comprising:
a memory, configured to store executable instructions; and
a processor, configured to execute the executable instructions to complete the method according to any one of claims 1 to 6.
10. A computer storage medium for storing computer-readable instructions, characterized in that the instructions, when executed, implement the method according to any one of claims 1 to 6.
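For concreteness, the following is a minimal sketch in Python/NumPy of one way the mapping of claims 4 to 6 could be pre-established: several three-dimensional face models with different face-shape parameters and a fixed expression are each described by per-feature weights (the semantic beautification feature parameters), and a linear map from those weights to face-shape parameters is fitted by least squares. The linear model and all names are illustrative assumptions, not the claimed implementation.

    import numpy as np

    def fit_semantic_to_shape_mapping(semantic_weights, shape_params):
        # semantic_weights: (N, K) weights of K semantic features for N face models
        # shape_params:     (N, S) face-shape parameters of the same N models
        # Returns M of shape (S, K) such that shape ~= M @ weights for each model.
        M_t, *_ = np.linalg.lstsq(semantic_weights, shape_params, rcond=None)
        return M_t.T

    # Synthetic check: recover a known mapping from noisy observations
    rng = np.random.default_rng(1)
    weights = rng.uniform(0.0, 1.0, size=(200, 3))      # 200 models, 3 semantic features
    true_map = rng.normal(size=(50, 3))                 # hidden ground-truth mapping
    shapes = weights @ true_map.T + 0.01 * rng.normal(size=(200, 50))
    M = fit_semantic_to_shape_mapping(weights, shapes)
    print(np.allclose(M, true_map, atol=0.05))          # True
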
CN201811199000.3A 2018-10-15 2018-10-15 Face beautification processing method and apparatus, electronic device and computer storage medium Pending CN109584146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811199000.3A CN109584146A (en) Face beautification processing method and apparatus, electronic device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811199000.3A CN109584146A (en) Face beautification processing method and apparatus, electronic device and computer storage medium

Publications (1)

Publication Number Publication Date
CN109584146A true CN109584146A (en) 2019-04-05

Family

ID=65920132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811199000.3A Pending CN109584146A (en) Face beautification processing method and apparatus, electronic device and computer storage medium

Country Status (1)

Country Link
CN (1) CN109584146A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN107820017A (en) * 2017-11-30 2018-03-20 广东欧珀移动通信有限公司 Image capturing method, device, computer-readable recording medium and electronic equipment
CN107886484A (en) * 2017-11-30 2018-04-06 广东欧珀移动通信有限公司 Face beautification method and apparatus, computer-readable recording medium and electronic equipment
CN108550185A (en) * 2018-05-31 2018-09-18 Oppo广东移动通信有限公司 Face beautification processing method and apparatus

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110473295A (en) * 2019-08-07 2019-11-19 重庆灵翎互娱科技有限公司 Method and apparatus for performing face beautification processing based on a three-dimensional face model
CN110473295B (en) * 2019-08-07 2023-04-25 重庆灵翎互娱科技有限公司 Method and equipment for carrying out beautifying treatment based on three-dimensional face model
CN112987932A (en) * 2021-03-24 2021-06-18 北京百度网讯科技有限公司 Human-computer interaction and control method and device based on virtual image
CN113050795A (en) * 2021-03-24 2021-06-29 北京百度网讯科技有限公司 Virtual image generation method and device
CN112987932B (en) * 2021-03-24 2023-04-18 北京百度网讯科技有限公司 Human-computer interaction and control method and device based on virtual image
CN113536007A (en) * 2021-07-05 2021-10-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109584146A (en) Face beautification processing method and apparatus, electronic device and computer storage medium
CN109255830B (en) Three-dimensional face reconstruction method and device
CN108399383A (en) Expression migration method, device, storage medium and program
US11544905B2 (en) Method and apparatus for providing virtual clothing wearing service based on deep-learning
CN109960986A (en) Human face posture analysis method, device, equipment, storage medium and program
US20230095092A1 (en) Denoising diffusion generative adversarial networks
CN110392903A (en) Dynamic rejection of matrix operations
CN110570499B (en) Expression generating method, device, computing equipment and storage medium
CN110517214A (en) Method and apparatus for generating image
CN113643412A (en) Virtual image generation method and device, electronic equipment and storage medium
CN114187633B (en) Image processing method and device, and training method and device for image generation model
CN109410253B (en) Method, apparatus, electronic device and computer-readable medium for generating information
CN113327278A (en) Three-dimensional face reconstruction method, device, equipment and storage medium
CN108235116A (en) Feature propagation method and device, electronic equipment, program and medium
CN111369428A (en) Virtual avatar generation method and device
US11880912B2 (en) Image processing method, image processing system, and program for colorizing line-drawing images using machine learning
CN110458173A (en) Method and apparatus for generating article color value
CN115965840A (en) Image style migration and model training method, device, equipment and medium
CN109410309A (en) Relighting method and device, electronic device and computer storage medium
CN108920281A (en) Large-scale image processing method and system
CN113052962B (en) Model training method, information output method, device, equipment and storage medium
CN112562043B (en) Image processing method and device and electronic equipment
CN107292940B (en) Method for drawing real-time music frequency spectrum vector graph
Li et al. Enhancing pencil drawing patterns via using semantic information
WO2023207779A1 (en) Image processing method and apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination